2026-02-27 00:00:09.107398 | Job console starting
2026-02-27 00:00:09.124156 | Updating git repos
2026-02-27 00:00:09.277704 | Cloning repos into workspace
2026-02-27 00:00:09.535389 | Restoring repo states
2026-02-27 00:00:09.570348 | Merging changes
2026-02-27 00:00:09.570371 | Checking out repos
2026-02-27 00:00:09.946386 | Preparing playbooks
2026-02-27 00:00:11.060417 | Running Ansible setup
2026-02-27 00:00:21.358347 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-27 00:00:23.034111 |
2026-02-27 00:00:23.034252 | PLAY [Base pre]
2026-02-27 00:00:23.076789 |
2026-02-27 00:00:23.076915 | TASK [Setup log path fact]
2026-02-27 00:00:23.124073 | orchestrator | ok
2026-02-27 00:00:23.146084 |
2026-02-27 00:00:23.146197 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-27 00:00:23.224602 | orchestrator | ok
2026-02-27 00:00:23.234063 |
2026-02-27 00:00:23.234149 | TASK [emit-job-header : Print job information]
2026-02-27 00:00:23.295693 | # Job Information
2026-02-27 00:00:23.295848 | Ansible Version: 2.16.14
2026-02-27 00:00:23.295876 | Job: testbed-deploy-stable-in-a-nutshell-with-tempest-ubuntu-24.04
2026-02-27 00:00:23.295904 | Pipeline: periodic-midnight
2026-02-27 00:00:23.295923 | Executor: 521e9411259a
2026-02-27 00:00:23.295940 | Triggered by: https://github.com/osism/testbed
2026-02-27 00:00:23.295958 | Event ID: 5a5cbd8274634c52955f09a6a4608bdc
2026-02-27 00:00:23.302137 |
2026-02-27 00:00:23.302224 | LOOP [emit-job-header : Print node information]
2026-02-27 00:00:23.532936 | orchestrator | ok:
2026-02-27 00:00:23.533125 | orchestrator | # Node Information
2026-02-27 00:00:23.533156 | orchestrator | Inventory Hostname: orchestrator
2026-02-27 00:00:23.533177 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-27 00:00:23.533195 | orchestrator | Username: zuul-testbed02
2026-02-27 00:00:23.533212 | orchestrator | Distro: Debian 12.13
2026-02-27 00:00:23.533232 | orchestrator | Provider: static-testbed
2026-02-27 00:00:23.533249 | orchestrator | Region:
2026-02-27 00:00:23.533267 | orchestrator | Label: testbed-orchestrator
2026-02-27 00:00:23.533284 | orchestrator | Product Name: OpenStack Nova
2026-02-27 00:00:23.533300 | orchestrator | Interface IP: 81.163.193.140
2026-02-27 00:00:23.542619 |
2026-02-27 00:00:23.542704 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-27 00:00:24.722513 | orchestrator -> localhost | changed
2026-02-27 00:00:24.729438 |
2026-02-27 00:00:24.729530 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-27 00:00:26.972287 | orchestrator -> localhost | changed
2026-02-27 00:00:26.993036 |
2026-02-27 00:00:26.993137 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-27 00:00:27.638347 | orchestrator -> localhost | ok
2026-02-27 00:00:27.644390 |
2026-02-27 00:00:27.644478 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-27 00:00:27.672807 | orchestrator | ok
2026-02-27 00:00:27.689664 | orchestrator | included: /var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-27 00:00:27.696346 |
2026-02-27 00:00:27.696419 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-27 00:00:33.702744 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-27 00:00:33.702974 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/work/be8ea8ba42aa40fdb20d16250c2e0f7e_id_rsa
2026-02-27 00:00:33.703033 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/work/be8ea8ba42aa40fdb20d16250c2e0f7e_id_rsa.pub
2026-02-27 00:00:33.703058 | orchestrator -> localhost | The key fingerprint is:
2026-02-27 00:00:33.703081 | orchestrator -> localhost | SHA256:ADIYLl7NE4c1o6DXX3mCKI1PHAhR3XSLnUQdw2K8Vg4 zuul-build-sshkey
2026-02-27 00:00:33.703100 | orchestrator -> localhost | The key's randomart image is:
2026-02-27 00:00:33.703127 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-27 00:00:33.703147 | orchestrator -> localhost | |.=*o+o=*o+oo. |
2026-02-27 00:00:33.703165 | orchestrator -> localhost | |o .+O+*.OE++. |
2026-02-27 00:00:33.703182 | orchestrator -> localhost | |.o = @.o.B*. |
2026-02-27 00:00:33.703198 | orchestrator -> localhost | |o o + o..oo. |
2026-02-27 00:00:33.703214 | orchestrator -> localhost | | . . .S |
2026-02-27 00:00:33.703234 | orchestrator -> localhost | | |
2026-02-27 00:00:33.703250 | orchestrator -> localhost | | |
2026-02-27 00:00:33.703266 | orchestrator -> localhost | | |
2026-02-27 00:00:33.703283 | orchestrator -> localhost | | |
2026-02-27 00:00:33.703300 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-27 00:00:33.703347 | orchestrator -> localhost | ok: Runtime: 0:00:04.756824
2026-02-27 00:00:33.709671 |
2026-02-27 00:00:33.709742 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-27 00:00:33.747507 | orchestrator | ok
2026-02-27 00:00:33.771112 | orchestrator | included: /var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-27 00:00:33.791239 |
2026-02-27 00:00:33.791317 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-27 00:00:33.826080 | orchestrator | skipping: Conditional result was False
2026-02-27 00:00:33.832844 |
2026-02-27 00:00:33.832923 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-27 00:00:34.594062 | orchestrator | changed
2026-02-27 00:00:34.603853 |
2026-02-27 00:00:34.603941 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-27 00:00:34.917596 | orchestrator | ok
2026-02-27 00:00:34.922765 |
2026-02-27 00:00:34.925910 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-27 00:00:35.476833 | orchestrator | ok
2026-02-27 00:00:35.485851 |
2026-02-27 00:00:35.485938 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-27 00:00:35.985393 | orchestrator | ok
2026-02-27 00:00:35.990407 |
2026-02-27 00:00:35.990493 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-27 00:00:36.050180 | orchestrator | skipping: Conditional result was False
2026-02-27 00:00:36.056653 |
2026-02-27 00:00:36.056752 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-27 00:00:37.136316 | orchestrator -> localhost | changed
2026-02-27 00:00:37.150397 |
2026-02-27 00:00:37.150490 | TASK [add-build-sshkey : Add back temp key]
2026-02-27 00:00:37.866779 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/work/be8ea8ba42aa40fdb20d16250c2e0f7e_id_rsa (zuul-build-sshkey)
2026-02-27 00:00:37.866984 | orchestrator -> localhost | ok: Runtime: 0:00:00.011318
2026-02-27 00:00:37.872772 |
2026-02-27 00:00:37.872853 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-27 00:00:38.546248 | orchestrator | ok
2026-02-27 00:00:38.559179 |
2026-02-27 00:00:38.559280 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-27 00:00:38.602216 | orchestrator | skipping: Conditional result was False
2026-02-27 00:00:38.686381 |
2026-02-27 00:00:38.686488 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-27 00:00:39.256811 | orchestrator | ok
2026-02-27 00:00:39.278902 |
2026-02-27 00:00:39.279041 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-27 00:00:39.377355 | orchestrator | ok
2026-02-27 00:00:39.384779 |
2026-02-27 00:00:39.384874 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-27 00:00:40.547065 | orchestrator -> localhost | ok
2026-02-27 00:00:40.555531 |
2026-02-27 00:00:40.555640 | TASK [validate-host : Collect information about the host]
2026-02-27 00:00:42.112821 | orchestrator | ok
2026-02-27 00:00:42.145545 |
2026-02-27 00:00:42.145678 | TASK [validate-host : Sanitize hostname]
2026-02-27 00:00:42.340241 | orchestrator | ok
2026-02-27 00:00:42.349553 |
2026-02-27 00:00:42.349667 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-27 00:00:44.167432 | orchestrator -> localhost | changed
2026-02-27 00:00:44.174037 |
2026-02-27 00:00:44.174131 | TASK [validate-host : Collect information about zuul worker]
2026-02-27 00:00:44.913923 | orchestrator | ok
2026-02-27 00:00:44.923511 |
2026-02-27 00:00:44.923623 | TASK [validate-host : Write out all zuul information for each host]
2026-02-27 00:00:46.320163 | orchestrator -> localhost | changed
2026-02-27 00:00:46.332646 |
2026-02-27 00:00:46.332760 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-27 00:00:46.635202 | orchestrator | ok
2026-02-27 00:00:46.647266 |
2026-02-27 00:00:46.647382 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-27 00:02:12.917740 | orchestrator | changed:
2026-02-27 00:02:12.918352 | orchestrator | .d..t...... src/
2026-02-27 00:02:12.918555 | orchestrator | .d..t...... src/github.com/
2026-02-27 00:02:12.918636 | orchestrator | .d..t...... src/github.com/osism/
2026-02-27 00:02:12.918704 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-27 00:02:12.918768 | orchestrator | RedHat.yml
2026-02-27 00:02:12.947496 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-27 00:02:12.947517 | orchestrator | RedHat.yml
2026-02-27 00:02:12.947586 | orchestrator | = 1.53.0"...
2026-02-27 00:02:25.787380 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-27 00:02:25.950660 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-27 00:02:26.383873 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-27 00:02:26.448500 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-27 00:02:27.116613 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-27 00:02:27.182486 | orchestrator | - Installing hashicorp/local v2.7.0...
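The provider set resolved by `tofu init` above would correspond to a version-constraints block roughly like the following. This is a sketch, not the testbed repository's actual configuration: the `>= 2.2.0` constraint for `hashicorp/local` is visible in the log, but the provider owning the truncated `>= 1.53.0` constraint is cut off, so attributing it to the openstack provider here is an assumption.

```hcl
terraform {
  required_providers {
    # Constraint visible (truncated) in the log; which provider it
    # belongs to is an assumption. init resolved openstack to v3.4.0.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # Constraint visible in the log; init resolved local to v2.7.0.
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # init resolved null to v3.2.4; its constraint is not shown in the log.
    null = {
      source = "hashicorp/null"
    }
  }
}
```

Whatever the exact constraints, the versions actually chosen are pinned in the `.terraform.lock.hcl` file that `tofu init` reports creating below.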
2026-02-27 00:02:27.681560 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-02-27 00:02:27.681636 | orchestrator |
2026-02-27 00:02:27.681643 | orchestrator | Providers are signed by their developers.
2026-02-27 00:02:27.681649 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-27 00:02:27.681653 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-27 00:02:27.681659 | orchestrator |
2026-02-27 00:02:27.681664 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-27 00:02:27.681668 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-27 00:02:27.681679 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-27 00:02:27.681683 | orchestrator | you run "tofu init" in the future.
2026-02-27 00:02:27.681959 | orchestrator |
2026-02-27 00:02:27.681968 | orchestrator | OpenTofu has been successfully initialized!
2026-02-27 00:02:27.681972 | orchestrator |
2026-02-27 00:02:27.681976 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-27 00:02:27.681980 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-27 00:02:27.681984 | orchestrator | should now work.
2026-02-27 00:02:27.681988 | orchestrator |
2026-02-27 00:02:27.681991 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-27 00:02:27.682034 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-27 00:02:27.682043 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-27 00:02:27.837076 | orchestrator | Created and switched to workspace "ci"!
2026-02-27 00:02:27.837145 | orchestrator |
2026-02-27 00:02:27.837154 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-27 00:02:27.837161 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-27 00:02:27.837167 | orchestrator | for this configuration.
2026-02-27 00:02:28.024046 | orchestrator | ci.auto.tfvars
2026-02-27 00:02:28.027761 | orchestrator | default_custom.tf
2026-02-27 00:02:29.070108 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-27 00:02:29.632477 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-27 00:02:29.853185 | orchestrator |
2026-02-27 00:02:29.853255 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-27 00:02:29.853264 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-27 00:02:29.853309 | orchestrator | + create
2026-02-27 00:02:29.853329 | orchestrator | <= read (data resources)
2026-02-27 00:02:29.853342 | orchestrator |
2026-02-27 00:02:29.853347 | orchestrator | OpenTofu will perform the following actions:
2026-02-27 00:02:29.853457 | orchestrator |
2026-02-27 00:02:29.853472 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-27 00:02:29.853477 | orchestrator | # (config refers to values not yet known)
2026-02-27 00:02:29.853481 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-27 00:02:29.853485 | orchestrator | + checksum = (known after apply)
2026-02-27 00:02:29.853489 | orchestrator | + created_at = (known after apply)
2026-02-27 00:02:29.853494 | orchestrator | + file = (known after apply)
2026-02-27 00:02:29.853497 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.853516 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.853520 | orchestrator | + min_disk_gb = (known after apply)
2026-02-27 00:02:29.853524 | orchestrator | + min_ram_mb = (known after apply)
2026-02-27 00:02:29.853529 | orchestrator | + most_recent = true
2026-02-27 00:02:29.853533 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.853536 | orchestrator | + protected = (known after apply)
2026-02-27 00:02:29.853540 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.853546 | orchestrator | + schema = (known after apply)
2026-02-27 00:02:29.853550 | orchestrator | + size_bytes = (known after apply)
2026-02-27 00:02:29.853554 | orchestrator | + tags = (known after apply)
2026-02-27 00:02:29.853558 | orchestrator | + updated_at = (known after apply)
2026-02-27 00:02:29.853562 | orchestrator | }
2026-02-27 00:02:29.853641 | orchestrator |
2026-02-27 00:02:29.853653 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-27 00:02:29.853658 | orchestrator | # (config refers to values not yet known)
2026-02-27 00:02:29.853662 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-27 00:02:29.853666 | orchestrator | + checksum = (known after apply)
2026-02-27 00:02:29.853670 | orchestrator | + created_at = (known after apply)
2026-02-27 00:02:29.853673 | orchestrator | + file = (known after apply)
2026-02-27 00:02:29.853677 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.853681 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.853685 | orchestrator | + min_disk_gb = (known after apply)
2026-02-27 00:02:29.853688 | orchestrator | + min_ram_mb = (known after apply)
2026-02-27 00:02:29.853692 | orchestrator | + most_recent = true
2026-02-27 00:02:29.853696 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.853700 | orchestrator | + protected = (known after apply)
2026-02-27 00:02:29.853704 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.853707 | orchestrator | + schema = (known after apply)
2026-02-27 00:02:29.853711 | orchestrator | + size_bytes = (known after apply)
2026-02-27 00:02:29.853715 | orchestrator | + tags = (known after apply)
2026-02-27 00:02:29.853718 | orchestrator | + updated_at = (known after apply)
2026-02-27 00:02:29.853722 | orchestrator | }
2026-02-27 00:02:29.853801 | orchestrator |
2026-02-27 00:02:29.853813 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-27 00:02:29.853817 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-27 00:02:29.853822 | orchestrator | + content = (known after apply)
2026-02-27 00:02:29.853826 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-27 00:02:29.853830 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-27 00:02:29.853834 | orchestrator | + content_md5 = (known after apply)
2026-02-27 00:02:29.853838 | orchestrator | + content_sha1 = (known after apply)
2026-02-27 00:02:29.853842 | orchestrator | + content_sha256 = (known after apply)
2026-02-27 00:02:29.853846 | orchestrator | + content_sha512 = (known after apply)
2026-02-27 00:02:29.853850 | orchestrator | + directory_permission = "0777"
2026-02-27 00:02:29.853854 | orchestrator | + file_permission = "0644"
2026-02-27 00:02:29.853858 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-27 00:02:29.853862 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.853865 | orchestrator | }
2026-02-27 00:02:29.853933 | orchestrator |
2026-02-27 00:02:29.853945 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-27 00:02:29.853949 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-27 00:02:29.853953 | orchestrator | + content = (known after apply)
2026-02-27 00:02:29.853957 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-27 00:02:29.853960 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-27 00:02:29.853964 | orchestrator | + content_md5 = (known after apply)
2026-02-27 00:02:29.853968 | orchestrator | + content_sha1 = (known after apply)
2026-02-27 00:02:29.853972 | orchestrator | + content_sha256 = (known after apply)
2026-02-27 00:02:29.853975 | orchestrator | + content_sha512 = (known after apply)
2026-02-27 00:02:29.853979 | orchestrator | + directory_permission = "0777"
2026-02-27 00:02:29.853983 | orchestrator | + file_permission = "0644"
2026-02-27 00:02:29.854042 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-27 00:02:29.854049 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854053 | orchestrator | }
2026-02-27 00:02:29.854148 | orchestrator |
2026-02-27 00:02:29.854173 | orchestrator | # local_file.inventory will be created
2026-02-27 00:02:29.854178 | orchestrator | + resource "local_file" "inventory" {
2026-02-27 00:02:29.854182 | orchestrator | + content = (known after apply)
2026-02-27 00:02:29.854186 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-27 00:02:29.854189 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-27 00:02:29.854193 | orchestrator | + content_md5 = (known after apply)
2026-02-27 00:02:29.854197 | orchestrator | + content_sha1 = (known after apply)
2026-02-27 00:02:29.854203 | orchestrator | + content_sha256 = (known after apply)
2026-02-27 00:02:29.854210 | orchestrator | + content_sha512 = (known after apply)
2026-02-27 00:02:29.854215 | orchestrator | + directory_permission = "0777"
2026-02-27 00:02:29.854221 | orchestrator | + file_permission = "0644"
2026-02-27 00:02:29.854226 | orchestrator | + filename = "inventory.ci"
2026-02-27 00:02:29.854232 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854238 | orchestrator | }
2026-02-27 00:02:29.854325 | orchestrator |
2026-02-27 00:02:29.854338 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-27 00:02:29.854343 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-27 00:02:29.854347 | orchestrator | + content = (sensitive value)
2026-02-27 00:02:29.854351 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-27 00:02:29.854354 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-27 00:02:29.854358 | orchestrator | + content_md5 = (known after apply)
2026-02-27 00:02:29.854362 | orchestrator | + content_sha1 = (known after apply)
2026-02-27 00:02:29.854366 | orchestrator | + content_sha256 = (known after apply)
2026-02-27 00:02:29.854370 | orchestrator | + content_sha512 = (known after apply)
2026-02-27 00:02:29.854374 | orchestrator | + directory_permission = "0700"
2026-02-27 00:02:29.854378 | orchestrator | + file_permission = "0600"
2026-02-27 00:02:29.854381 | orchestrator | + filename = ".id_rsa.ci"
2026-02-27 00:02:29.854385 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854389 | orchestrator | }
2026-02-27 00:02:29.854409 | orchestrator |
2026-02-27 00:02:29.854420 | orchestrator | # null_resource.node_semaphore will be created
2026-02-27 00:02:29.854424 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-27 00:02:29.854428 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854432 | orchestrator | }
2026-02-27 00:02:29.854499 | orchestrator |
2026-02-27 00:02:29.854510 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-27 00:02:29.854515 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-27 00:02:29.854519 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.854522 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.854526 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854530 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.854534 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.854538 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-27 00:02:29.854541 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.854545 | orchestrator | + size = 80
2026-02-27 00:02:29.854549 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.854553 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.854556 | orchestrator | }
2026-02-27 00:02:29.854618 | orchestrator |
2026-02-27 00:02:29.854629 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-27 00:02:29.854633 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-27 00:02:29.854637 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.854641 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.854645 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854654 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.854658 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.854662 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-27 00:02:29.854666 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.854670 | orchestrator | + size = 80
2026-02-27 00:02:29.854674 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.854677 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.854681 | orchestrator | }
2026-02-27 00:02:29.854742 | orchestrator |
2026-02-27 00:02:29.854753 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-27 00:02:29.854757 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-27 00:02:29.854761 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.854765 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.854769 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854772 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.854776 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.854780 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-27 00:02:29.854784 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.854788 | orchestrator | + size = 80
2026-02-27 00:02:29.854791 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.854795 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.854799 | orchestrator | }
2026-02-27 00:02:29.854859 | orchestrator |
2026-02-27 00:02:29.854870 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-27 00:02:29.854875 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-27 00:02:29.854879 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.854882 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.854886 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.854890 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.854894 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.854897 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-27 00:02:29.854901 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.854905 | orchestrator | + size = 80
2026-02-27 00:02:29.854909 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.854913 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.854916 | orchestrator | }
2026-02-27 00:02:29.854974 | orchestrator |
2026-02-27 00:02:29.854985 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-27 00:02:29.854989 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-27 00:02:29.855010 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.855014 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.855018 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.855022 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.855026 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.855033 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-27 00:02:29.855037 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.855041 | orchestrator | + size = 80
2026-02-27 00:02:29.855045 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.855049 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.855053 | orchestrator | }
2026-02-27 00:02:29.855111 | orchestrator |
2026-02-27 00:02:29.855122 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-27 00:02:29.855126 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-27 00:02:29.855130 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.855134 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.855138 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.855146 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.855150 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.855154 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-27 00:02:29.855157 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.855161 | orchestrator | + size = 80
2026-02-27 00:02:29.855165 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.855169 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.855172 | orchestrator | }
2026-02-27 00:02:29.855233 | orchestrator |
2026-02-27 00:02:29.855244 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-27 00:02:29.855248 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-27 00:02:29.855252 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.855256 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.855259 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.855263 | orchestrator | + image_id = (known after apply) 2026-02-27 00:02:29.855267 | orchestrator | + metadata = (known after apply) 2026-02-27 00:02:29.855271 | orchestrator | + name = "testbed-volume-5-node-base" 2026-02-27 00:02:29.855275 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.855278 | orchestrator | + size = 80 2026-02-27 00:02:29.855282 | orchestrator | + volume_retype_policy = "never" 2026-02-27 00:02:29.855286 | orchestrator | + volume_type = "ssd" 2026-02-27 00:02:29.855290 | orchestrator | } 2026-02-27 00:02:29.855350 | orchestrator | 2026-02-27 00:02:29.855365 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created 2026-02-27 00:02:29.855373 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-27 00:02:29.855379 | orchestrator | + attachment = (known after apply) 2026-02-27 00:02:29.855385 | orchestrator | + availability_zone = "nova" 2026-02-27 00:02:29.855391 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.855398 | orchestrator | + metadata = (known after apply) 2026-02-27 00:02:29.855404 | orchestrator | + name = "testbed-volume-0-node-3" 2026-02-27 00:02:29.855408 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.855411 | orchestrator | + size = 20 2026-02-27 00:02:29.855415 | orchestrator | + volume_retype_policy = "never" 2026-02-27 00:02:29.855419 | orchestrator | + volume_type = "ssd" 2026-02-27 00:02:29.855423 | orchestrator | } 2026-02-27 00:02:29.855494 | orchestrator | 2026-02-27 00:02:29.855511 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created 2026-02-27 00:02:29.855516 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-27 00:02:29.855520 | orchestrator | + attachment = (known after apply) 2026-02-27 00:02:29.855524 | orchestrator | + availability_zone = "nova" 2026-02-27 00:02:29.855528 | orchestrator | + id = (known after apply) 
2026-02-27 00:02:29.855532 | orchestrator | + metadata = (known after apply) 2026-02-27 00:02:29.855535 | orchestrator | + name = "testbed-volume-1-node-4" 2026-02-27 00:02:29.855539 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.855543 | orchestrator | + size = 20 2026-02-27 00:02:29.855547 | orchestrator | + volume_retype_policy = "never" 2026-02-27 00:02:29.855551 | orchestrator | + volume_type = "ssd" 2026-02-27 00:02:29.855554 | orchestrator | } 2026-02-27 00:02:29.855613 | orchestrator | 2026-02-27 00:02:29.855624 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created 2026-02-27 00:02:29.855629 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-27 00:02:29.855632 | orchestrator | + attachment = (known after apply) 2026-02-27 00:02:29.855636 | orchestrator | + availability_zone = "nova" 2026-02-27 00:02:29.855640 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.855644 | orchestrator | + metadata = (known after apply) 2026-02-27 00:02:29.855647 | orchestrator | + name = "testbed-volume-2-node-5" 2026-02-27 00:02:29.855651 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.855660 | orchestrator | + size = 20 2026-02-27 00:02:29.855663 | orchestrator | + volume_retype_policy = "never" 2026-02-27 00:02:29.855667 | orchestrator | + volume_type = "ssd" 2026-02-27 00:02:29.855671 | orchestrator | } 2026-02-27 00:02:29.855728 | orchestrator | 2026-02-27 00:02:29.855739 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created 2026-02-27 00:02:29.855744 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-27 00:02:29.855747 | orchestrator | + attachment = (known after apply) 2026-02-27 00:02:29.855751 | orchestrator | + availability_zone = "nova" 2026-02-27 00:02:29.855755 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.855759 | orchestrator | + metadata = (known after apply) 
2026-02-27 00:02:29.855763 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-27 00:02:29.855766 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.855770 | orchestrator | + size = 20
2026-02-27 00:02:29.855774 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.855778 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.855781 | orchestrator | }
2026-02-27 00:02:29.855838 | orchestrator | 
2026-02-27 00:02:29.855850 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-27 00:02:29.855854 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-27 00:02:29.855858 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.855862 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.855866 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.855869 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.855873 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-27 00:02:29.855877 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.855884 | orchestrator | + size = 20
2026-02-27 00:02:29.855888 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.855892 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.855896 | orchestrator | }
2026-02-27 00:02:29.855954 | orchestrator | 
2026-02-27 00:02:29.855965 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-27 00:02:29.855970 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-27 00:02:29.855974 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.855977 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.855981 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.855985 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.855989 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-27 00:02:29.856047 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.856051 | orchestrator | + size = 20
2026-02-27 00:02:29.856055 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.856059 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.856063 | orchestrator | }
2026-02-27 00:02:29.856141 | orchestrator | 
2026-02-27 00:02:29.856158 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-27 00:02:29.856165 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-27 00:02:29.856169 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.856173 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.856176 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.856180 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.856184 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-27 00:02:29.856188 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.856191 | orchestrator | + size = 20
2026-02-27 00:02:29.856195 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.856199 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.856203 | orchestrator | }
2026-02-27 00:02:29.856288 | orchestrator | 
2026-02-27 00:02:29.856305 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-27 00:02:29.856311 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-27 00:02:29.856324 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.856328 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.856332 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.856336 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.856339 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-27 00:02:29.856343 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.856347 | orchestrator | + size = 20
2026-02-27 00:02:29.856351 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.856355 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.856358 | orchestrator | }
2026-02-27 00:02:29.856422 | orchestrator | 
2026-02-27 00:02:29.856433 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created
2026-02-27 00:02:29.856438 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-27 00:02:29.856441 | orchestrator | + attachment = (known after apply)
2026-02-27 00:02:29.856445 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.856449 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.856453 | orchestrator | + metadata = (known after apply)
2026-02-27 00:02:29.856457 | orchestrator | + name = "testbed-volume-8-node-5"
2026-02-27 00:02:29.856461 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.856464 | orchestrator | + size = 20
2026-02-27 00:02:29.856468 | orchestrator | + volume_retype_policy = "never"
2026-02-27 00:02:29.856472 | orchestrator | + volume_type = "ssd"
2026-02-27 00:02:29.856476 | orchestrator | }
2026-02-27 00:02:29.856674 | orchestrator | 
2026-02-27 00:02:29.856687 | orchestrator | # openstack_compute_instance_v2.manager_server will be created
2026-02-27 00:02:29.856692 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" {
2026-02-27 00:02:29.856696 | orchestrator | + access_ip_v4 = (known after apply)
2026-02-27 00:02:29.856700 | orchestrator | + access_ip_v6 = (known after apply)
2026-02-27 00:02:29.856705 | orchestrator | + all_metadata = (known after apply)
2026-02-27 00:02:29.856713 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.856717 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.856721 | orchestrator | + config_drive = true
2026-02-27 00:02:29.856724 | orchestrator | + created = (known after apply)
2026-02-27 00:02:29.856728 | orchestrator | + flavor_id = (known after apply)
2026-02-27 00:02:29.856733 | orchestrator | + flavor_name = "OSISM-4V-16"
2026-02-27 00:02:29.856736 | orchestrator | + force_delete = false
2026-02-27 00:02:29.856740 | orchestrator | + hypervisor_hostname = (known after apply)
2026-02-27 00:02:29.856744 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.856748 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.856753 | orchestrator | + image_name = (known after apply)
2026-02-27 00:02:29.856760 | orchestrator | + key_pair = "testbed"
2026-02-27 00:02:29.856765 | orchestrator | + name = "testbed-manager"
2026-02-27 00:02:29.856768 | orchestrator | + power_state = "active"
2026-02-27 00:02:29.856772 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.856776 | orchestrator | + security_groups = (known after apply)
2026-02-27 00:02:29.856780 | orchestrator | + stop_before_destroy = false
2026-02-27 00:02:29.856784 | orchestrator | + updated = (known after apply)
2026-02-27 00:02:29.856787 | orchestrator | + user_data = (sensitive value)
2026-02-27 00:02:29.856791 | orchestrator | 
2026-02-27 00:02:29.856795 | orchestrator | + block_device {
2026-02-27 00:02:29.856800 | orchestrator | + boot_index = 0
2026-02-27 00:02:29.856806 | orchestrator | + delete_on_termination = false
2026-02-27 00:02:29.856816 | orchestrator | + destination_type = "volume"
2026-02-27 00:02:29.856820 | orchestrator | + multiattach = false
2026-02-27 00:02:29.856824 | orchestrator | + source_type = "volume"
2026-02-27 00:02:29.856827 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.856836 | orchestrator | }
2026-02-27 00:02:29.856840 | orchestrator | 
2026-02-27 00:02:29.856844 | orchestrator | + network {
2026-02-27 00:02:29.856850 | orchestrator | + access_network = false
2026-02-27 00:02:29.856857 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-02-27 00:02:29.856861 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-02-27 00:02:29.856865 | orchestrator | + mac = (known after apply)
2026-02-27 00:02:29.856869 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.856873 | orchestrator | + port = (known after apply)
2026-02-27 00:02:29.856876 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.856880 | orchestrator | }
2026-02-27 00:02:29.856884 | orchestrator | }
2026-02-27 00:02:29.857102 | orchestrator | 
2026-02-27 00:02:29.857115 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created
2026-02-27 00:02:29.857120 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-02-27 00:02:29.857124 | orchestrator | + access_ip_v4 = (known after apply)
2026-02-27 00:02:29.857127 | orchestrator | + access_ip_v6 = (known after apply)
2026-02-27 00:02:29.857131 | orchestrator | + all_metadata = (known after apply)
2026-02-27 00:02:29.857135 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.857139 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.857143 | orchestrator | + config_drive = true
2026-02-27 00:02:29.857146 | orchestrator | + created = (known after apply)
2026-02-27 00:02:29.857150 | orchestrator | + flavor_id = (known after apply)
2026-02-27 00:02:29.857154 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-02-27 00:02:29.857158 | orchestrator | + force_delete = false
2026-02-27 00:02:29.857161 | orchestrator | + hypervisor_hostname = (known after apply)
2026-02-27 00:02:29.857165 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.857169 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.857173 | orchestrator | + image_name = (known after apply)
2026-02-27 00:02:29.857177 | orchestrator | + key_pair = "testbed"
2026-02-27 00:02:29.857180 | orchestrator | + name = "testbed-node-0"
2026-02-27 00:02:29.857184 | orchestrator | + power_state = "active"
2026-02-27 00:02:29.857188 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.857192 | orchestrator | + security_groups = (known after apply)
2026-02-27 00:02:29.857195 | orchestrator | + stop_before_destroy = false
2026-02-27 00:02:29.857199 | orchestrator | + updated = (known after apply)
2026-02-27 00:02:29.857203 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-02-27 00:02:29.857207 | orchestrator | 
2026-02-27 00:02:29.857211 | orchestrator | + block_device {
2026-02-27 00:02:29.857215 | orchestrator | + boot_index = 0
2026-02-27 00:02:29.857218 | orchestrator | + delete_on_termination = false
2026-02-27 00:02:29.857222 | orchestrator | + destination_type = "volume"
2026-02-27 00:02:29.857226 | orchestrator | + multiattach = false
2026-02-27 00:02:29.857230 | orchestrator | + source_type = "volume"
2026-02-27 00:02:29.857234 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.857237 | orchestrator | }
2026-02-27 00:02:29.857241 | orchestrator | 
2026-02-27 00:02:29.857245 | orchestrator | + network {
2026-02-27 00:02:29.857249 | orchestrator | + access_network = false
2026-02-27 00:02:29.857252 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-02-27 00:02:29.857256 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-02-27 00:02:29.857260 | orchestrator | + mac = (known after apply)
2026-02-27 00:02:29.857264 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.857268 | orchestrator | + port = (known after apply)
2026-02-27 00:02:29.857271 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.857275 | orchestrator | }
2026-02-27 00:02:29.857279 | orchestrator | }
2026-02-27 00:02:29.857503 | orchestrator | 
2026-02-27 00:02:29.857516 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created
2026-02-27 00:02:29.857521 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-02-27 00:02:29.857525 | orchestrator | + access_ip_v4 = (known after apply)
2026-02-27 00:02:29.857536 | orchestrator | + access_ip_v6 = (known after apply)
2026-02-27 00:02:29.857540 | orchestrator | + all_metadata = (known after apply)
2026-02-27 00:02:29.857544 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.857548 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.857551 | orchestrator | + config_drive = true
2026-02-27 00:02:29.857555 | orchestrator | + created = (known after apply)
2026-02-27 00:02:29.857559 | orchestrator | + flavor_id = (known after apply)
2026-02-27 00:02:29.857563 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-02-27 00:02:29.857566 | orchestrator | + force_delete = false
2026-02-27 00:02:29.857570 | orchestrator | + hypervisor_hostname = (known after apply)
2026-02-27 00:02:29.857574 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.857578 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.857581 | orchestrator | + image_name = (known after apply)
2026-02-27 00:02:29.857585 | orchestrator | + key_pair = "testbed"
2026-02-27 00:02:29.857589 | orchestrator | + name = "testbed-node-1"
2026-02-27 00:02:29.857593 | orchestrator | + power_state = "active"
2026-02-27 00:02:29.857597 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.857600 | orchestrator | + security_groups = (known after apply)
2026-02-27 00:02:29.857604 | orchestrator | + stop_before_destroy = false
2026-02-27 00:02:29.857608 | orchestrator | + updated = (known after apply)
2026-02-27 00:02:29.857612 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-02-27 00:02:29.857616 | orchestrator | 
2026-02-27 00:02:29.857619 | orchestrator | + block_device {
2026-02-27 00:02:29.857623 | orchestrator | + boot_index = 0
2026-02-27 00:02:29.857627 | orchestrator | + delete_on_termination = false
2026-02-27 00:02:29.857631 | orchestrator | + destination_type = "volume"
2026-02-27 00:02:29.857634 | orchestrator | + multiattach = false
2026-02-27 00:02:29.857638 | orchestrator | + source_type = "volume"
2026-02-27 00:02:29.857642 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.857646 | orchestrator | }
2026-02-27 00:02:29.857649 | orchestrator | 
2026-02-27 00:02:29.857653 | orchestrator | + network {
2026-02-27 00:02:29.857657 | orchestrator | + access_network = false
2026-02-27 00:02:29.857661 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-02-27 00:02:29.857665 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-02-27 00:02:29.857668 | orchestrator | + mac = (known after apply)
2026-02-27 00:02:29.857672 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.857676 | orchestrator | + port = (known after apply)
2026-02-27 00:02:29.857680 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.857683 | orchestrator | }
2026-02-27 00:02:29.857687 | orchestrator | }
2026-02-27 00:02:29.857867 | orchestrator | 
2026-02-27 00:02:29.857879 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created
2026-02-27 00:02:29.857883 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-02-27 00:02:29.857887 | orchestrator | + access_ip_v4 = (known after apply)
2026-02-27 00:02:29.857891 | orchestrator | + access_ip_v6 = (known after apply)
2026-02-27 00:02:29.857896 | orchestrator | + all_metadata = (known after apply)
2026-02-27 00:02:29.857900 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.857906 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.857910 | orchestrator | + config_drive = true
2026-02-27 00:02:29.857914 | orchestrator | + created = (known after apply)
2026-02-27 00:02:29.857918 | orchestrator | + flavor_id = (known after apply)
2026-02-27 00:02:29.857921 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-02-27 00:02:29.857925 | orchestrator | + force_delete = false
2026-02-27 00:02:29.857929 | orchestrator | + hypervisor_hostname = (known after apply)
2026-02-27 00:02:29.857933 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.857937 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.857944 | orchestrator | + image_name = (known after apply)
2026-02-27 00:02:29.857948 | orchestrator | + key_pair = "testbed"
2026-02-27 00:02:29.857951 | orchestrator | + name = "testbed-node-2"
2026-02-27 00:02:29.857955 | orchestrator | + power_state = "active"
2026-02-27 00:02:29.857959 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.857963 | orchestrator | + security_groups = (known after apply)
2026-02-27 00:02:29.857966 | orchestrator | + stop_before_destroy = false
2026-02-27 00:02:29.857970 | orchestrator | + updated = (known after apply)
2026-02-27 00:02:29.857974 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-02-27 00:02:29.857978 | orchestrator | 
2026-02-27 00:02:29.857982 | orchestrator | + block_device {
2026-02-27 00:02:29.857985 | orchestrator | + boot_index = 0
2026-02-27 00:02:29.857989 | orchestrator | + delete_on_termination = false
2026-02-27 00:02:29.858044 | orchestrator | + destination_type = "volume"
2026-02-27 00:02:29.858049 | orchestrator | + multiattach = false
2026-02-27 00:02:29.858053 | orchestrator | + source_type = "volume"
2026-02-27 00:02:29.858056 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.858060 | orchestrator | }
2026-02-27 00:02:29.858064 | orchestrator | 
2026-02-27 00:02:29.858068 | orchestrator | + network {
2026-02-27 00:02:29.858072 | orchestrator | + access_network = false
2026-02-27 00:02:29.858076 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-02-27 00:02:29.858080 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-02-27 00:02:29.858084 | orchestrator | + mac = (known after apply)
2026-02-27 00:02:29.858087 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.858091 | orchestrator | + port = (known after apply)
2026-02-27 00:02:29.858095 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.858099 | orchestrator | }
2026-02-27 00:02:29.858103 | orchestrator | }
2026-02-27 00:02:29.858282 | orchestrator | 
2026-02-27 00:02:29.858294 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created
2026-02-27 00:02:29.858298 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-02-27 00:02:29.858302 | orchestrator | + access_ip_v4 = (known after apply)
2026-02-27 00:02:29.858306 | orchestrator | + access_ip_v6 = (known after apply)
2026-02-27 00:02:29.858310 | orchestrator | + all_metadata = (known after apply)
2026-02-27 00:02:29.858314 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.858318 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.858321 | orchestrator | + config_drive = true
2026-02-27 00:02:29.858325 | orchestrator | + created = (known after apply)
2026-02-27 00:02:29.858329 | orchestrator | + flavor_id = (known after apply)
2026-02-27 00:02:29.858333 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-02-27 00:02:29.858337 | orchestrator | + force_delete = false
2026-02-27 00:02:29.858340 | orchestrator | + hypervisor_hostname = (known after apply)
2026-02-27 00:02:29.858344 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.858348 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.858352 | orchestrator | + image_name = (known after apply)
2026-02-27 00:02:29.858356 | orchestrator | + key_pair = "testbed"
2026-02-27 00:02:29.858359 | orchestrator | + name = "testbed-node-3"
2026-02-27 00:02:29.858363 | orchestrator | + power_state = "active"
2026-02-27 00:02:29.858367 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.858371 | orchestrator | + security_groups = (known after apply)
2026-02-27 00:02:29.858375 | orchestrator | + stop_before_destroy = false
2026-02-27 00:02:29.858378 | orchestrator | + updated = (known after apply)
2026-02-27 00:02:29.858382 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-02-27 00:02:29.858386 | orchestrator | 
2026-02-27 00:02:29.858390 | orchestrator | + block_device {
2026-02-27 00:02:29.858397 | orchestrator | + boot_index = 0
2026-02-27 00:02:29.858401 | orchestrator | + delete_on_termination = false
2026-02-27 00:02:29.858405 | orchestrator | + destination_type = "volume"
2026-02-27 00:02:29.858412 | orchestrator | + multiattach = false
2026-02-27 00:02:29.858416 | orchestrator | + source_type = "volume"
2026-02-27 00:02:29.858419 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.858423 | orchestrator | }
2026-02-27 00:02:29.858427 | orchestrator | 
2026-02-27 00:02:29.858431 | orchestrator | + network {
2026-02-27 00:02:29.858435 | orchestrator | + access_network = false
2026-02-27 00:02:29.858438 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-02-27 00:02:29.858442 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-02-27 00:02:29.858446 | orchestrator | + mac = (known after apply)
2026-02-27 00:02:29.858450 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.858454 | orchestrator | + port = (known after apply)
2026-02-27 00:02:29.858457 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.858461 | orchestrator | }
2026-02-27 00:02:29.858465 | orchestrator | }
2026-02-27 00:02:29.858638 | orchestrator | 
2026-02-27 00:02:29.858649 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created
2026-02-27 00:02:29.858654 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-02-27 00:02:29.858658 | orchestrator | + access_ip_v4 = (known after apply)
2026-02-27 00:02:29.858662 | orchestrator | + access_ip_v6 = (known after apply)
2026-02-27 00:02:29.858665 | orchestrator | + all_metadata = (known after apply)
2026-02-27 00:02:29.858669 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.858673 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.858677 | orchestrator | + config_drive = true
2026-02-27 00:02:29.858680 | orchestrator | + created = (known after apply)
2026-02-27 00:02:29.858684 | orchestrator | + flavor_id = (known after apply)
2026-02-27 00:02:29.858688 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-02-27 00:02:29.858692 | orchestrator | + force_delete = false
2026-02-27 00:02:29.858696 | orchestrator | + hypervisor_hostname = (known after apply)
2026-02-27 00:02:29.858702 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.858708 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.858714 | orchestrator | + image_name = (known after apply)
2026-02-27 00:02:29.858721 | orchestrator | + key_pair = "testbed"
2026-02-27 00:02:29.858725 | orchestrator | + name = "testbed-node-4"
2026-02-27 00:02:29.858729 | orchestrator | + power_state = "active"
2026-02-27 00:02:29.858733 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.858736 | orchestrator | + security_groups = (known after apply)
2026-02-27 00:02:29.858740 | orchestrator | + stop_before_destroy = false
2026-02-27 00:02:29.858744 | orchestrator | + updated = (known after apply)
2026-02-27 00:02:29.858748 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-02-27 00:02:29.858751 | orchestrator | 
2026-02-27 00:02:29.858755 | orchestrator | + block_device {
2026-02-27 00:02:29.858759 | orchestrator | + boot_index = 0
2026-02-27 00:02:29.858763 | orchestrator | + delete_on_termination = false
2026-02-27 00:02:29.858768 | orchestrator | + destination_type = "volume"
2026-02-27 00:02:29.858774 | orchestrator | + multiattach = false
2026-02-27 00:02:29.858779 | orchestrator | + source_type = "volume"
2026-02-27 00:02:29.858785 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.858790 | orchestrator | }
2026-02-27 00:02:29.858797 | orchestrator | 
2026-02-27 00:02:29.858801 | orchestrator | + network {
2026-02-27 00:02:29.858805 | orchestrator | + access_network = false
2026-02-27 00:02:29.858808 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-02-27 00:02:29.858812 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-02-27 00:02:29.858816 | orchestrator | + mac = (known after apply)
2026-02-27 00:02:29.858819 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.858823 | orchestrator | + port = (known after apply)
2026-02-27 00:02:29.858827 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.858831 | orchestrator | }
2026-02-27 00:02:29.858834 | orchestrator | }
2026-02-27 00:02:29.859038 | orchestrator | 
2026-02-27 00:02:29.859055 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created
2026-02-27 00:02:29.859062 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" {
2026-02-27 00:02:29.859068 | orchestrator | + access_ip_v4 = (known after apply)
2026-02-27 00:02:29.859074 | orchestrator | + access_ip_v6 = (known after apply)
2026-02-27 00:02:29.859081 | orchestrator | + all_metadata = (known after apply)
2026-02-27 00:02:29.859086 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.859092 | orchestrator | + availability_zone = "nova"
2026-02-27 00:02:29.859098 | orchestrator | + config_drive = true
2026-02-27 00:02:29.859104 | orchestrator | + created = (known after apply)
2026-02-27 00:02:29.859111 | orchestrator | + flavor_id = (known after apply)
2026-02-27 00:02:29.859115 | orchestrator | + flavor_name = "OSISM-8V-32"
2026-02-27 00:02:29.859119 | orchestrator | + force_delete = false
2026-02-27 00:02:29.859130 | orchestrator | + hypervisor_hostname = (known after apply)
2026-02-27 00:02:29.859137 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859142 | orchestrator | + image_id = (known after apply)
2026-02-27 00:02:29.859146 | orchestrator | + image_name = (known after apply)
2026-02-27 00:02:29.859150 | orchestrator | + key_pair = "testbed"
2026-02-27 00:02:29.859154 | orchestrator | + name = "testbed-node-5"
2026-02-27 00:02:29.859157 | orchestrator | + power_state = "active"
2026-02-27 00:02:29.859161 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859165 | orchestrator | + security_groups = (known after apply)
2026-02-27 00:02:29.859169 | orchestrator | + stop_before_destroy = false
2026-02-27 00:02:29.859173 | orchestrator | + updated = (known after apply)
2026-02-27 00:02:29.859177 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2026-02-27 00:02:29.859180 | orchestrator | 
2026-02-27 00:02:29.859184 | orchestrator | + block_device {
2026-02-27 00:02:29.859188 | orchestrator | + boot_index = 0
2026-02-27 00:02:29.859192 | orchestrator | + delete_on_termination = false
2026-02-27 00:02:29.859195 | orchestrator | + destination_type = "volume"
2026-02-27 00:02:29.859199 | orchestrator | + multiattach = false
2026-02-27 00:02:29.859203 | orchestrator | + source_type = "volume"
2026-02-27 00:02:29.859207 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.859210 | orchestrator | }
2026-02-27 00:02:29.859214 | orchestrator | 
2026-02-27 00:02:29.859218 | orchestrator | + network {
2026-02-27 00:02:29.859222 | orchestrator | + access_network = false
2026-02-27 00:02:29.859226 | orchestrator | + fixed_ip_v4 = (known after apply)
2026-02-27 00:02:29.859229 | orchestrator | + fixed_ip_v6 = (known after apply)
2026-02-27 00:02:29.859233 | orchestrator | + mac = (known after apply)
2026-02-27 00:02:29.859237 | orchestrator | + name = (known after apply)
2026-02-27 00:02:29.859243 | orchestrator | + port = (known after apply)
2026-02-27 00:02:29.859249 | orchestrator | + uuid = (known after apply)
2026-02-27 00:02:29.859255 | orchestrator | }
2026-02-27 00:02:29.859261 | orchestrator | }
2026-02-27 00:02:29.859309 | orchestrator | 
2026-02-27 00:02:29.859320 | orchestrator | # openstack_compute_keypair_v2.key will be created
2026-02-27 00:02:29.859324 | orchestrator | + resource "openstack_compute_keypair_v2" "key" {
2026-02-27 00:02:29.859328 | orchestrator | + fingerprint = (known after apply)
2026-02-27 00:02:29.859332 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859336 | orchestrator | + name = "testbed"
2026-02-27 00:02:29.859340 | orchestrator | + private_key = (sensitive value)
2026-02-27 00:02:29.859343 | orchestrator | + public_key = (known after apply)
2026-02-27 00:02:29.859347 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859351 | orchestrator | + user_id = (known after apply)
2026-02-27 00:02:29.859355 | orchestrator | }
2026-02-27 00:02:29.859390 | orchestrator | 
2026-02-27 00:02:29.859401 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
2026-02-27 00:02:29.859405 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859414 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859418 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859421 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.859425 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859429 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.859433 | orchestrator | }
2026-02-27 00:02:29.859467 | orchestrator | 
2026-02-27 00:02:29.859478 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
2026-02-27 00:02:29.859482 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859488 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859493 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859499 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.859505 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859511 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.859517 | orchestrator | }
2026-02-27 00:02:29.859576 | orchestrator | 
2026-02-27 00:02:29.859589 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
2026-02-27 00:02:29.859593 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859597 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859601 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859605 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.859608 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859612 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.859616 | orchestrator | }
2026-02-27 00:02:29.859654 | orchestrator | 
2026-02-27 00:02:29.859665 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
2026-02-27 00:02:29.859669 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859673 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859677 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859681 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.859685 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859689 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.859692 | orchestrator | }
2026-02-27 00:02:29.859727 | orchestrator | 
2026-02-27 00:02:29.859737 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
2026-02-27 00:02:29.859742 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859746 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859750 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859754 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.859761 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859765 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.859769 | orchestrator | }
2026-02-27 00:02:29.859804 | orchestrator | 
2026-02-27 00:02:29.859815 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
2026-02-27 00:02:29.859819 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859823 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859827 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859830 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.859834 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859838 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.859841 | orchestrator | }
2026-02-27 00:02:29.859884 | orchestrator | 
2026-02-27 00:02:29.859895 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
2026-02-27 00:02:29.859900 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859904 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859907 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859911 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.859915 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.859923 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.859927 | orchestrator | }
2026-02-27 00:02:29.859963 | orchestrator | 
2026-02-27 00:02:29.859974 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
2026-02-27 00:02:29.859978 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.859982 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.859986 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.859989 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.860031 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.860036 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.860039 | orchestrator | }
2026-02-27 00:02:29.860078 | orchestrator | 
2026-02-27 00:02:29.860089 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
2026-02-27 00:02:29.860093 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
2026-02-27 00:02:29.860097 | orchestrator | + device = (known after apply)
2026-02-27 00:02:29.860101 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.860105 | orchestrator | + instance_id = (known after apply)
2026-02-27 00:02:29.860108 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.860112 | orchestrator | + volume_id = (known after apply)
2026-02-27 00:02:29.860116 | orchestrator | }
2026-02-27 00:02:29.860151 | orchestrator | 
2026-02-27 00:02:29.860161 | orchestrator | # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
2026-02-27 00:02:29.860167 | orchestrator | + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
2026-02-27 00:02:29.860171 | orchestrator | + fixed_ip = (known after apply)
2026-02-27 00:02:29.860175 | orchestrator | + floating_ip = (known after apply)
2026-02-27 00:02:29.860179 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.860182 | orchestrator | + port_id = (known after apply)
2026-02-27 00:02:29.860186 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.860190 | orchestrator | }
2026-02-27 00:02:29.860248 | orchestrator | 
2026-02-27 00:02:29.860259 | orchestrator | # openstack_networking_floatingip_v2.manager_floating_ip will be created
2026-02-27 00:02:29.860264 | orchestrator | + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
2026-02-27 00:02:29.860268 | orchestrator | + address = (known after apply)
2026-02-27 00:02:29.860272 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.860276 | orchestrator | + dns_domain = (known after apply)
2026-02-27 00:02:29.860279 | orchestrator | + dns_name = (known after apply)
2026-02-27 00:02:29.860283 | orchestrator | + fixed_ip = (known after apply)
2026-02-27 00:02:29.860287 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.860291 | orchestrator | + pool = "public"
2026-02-27 00:02:29.860295 | orchestrator | + port_id = (known after apply)
2026-02-27 00:02:29.860299 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.860303 | orchestrator | + subnet_id = (known after apply)
2026-02-27 00:02:29.860306 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.860310 | orchestrator | }
2026-02-27 00:02:29.860397 | orchestrator | 
2026-02-27 00:02:29.860408 | orchestrator | # openstack_networking_network_v2.net_management will be created
2026-02-27 00:02:29.860412 | orchestrator | + resource "openstack_networking_network_v2" "net_management" {
2026-02-27 00:02:29.860416 | orchestrator | + admin_state_up = (known after apply)
2026-02-27 00:02:29.860420 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.860424 | orchestrator | + availability_zone_hints = [
2026-02-27 00:02:29.860428 | orchestrator | + "nova",
2026-02-27 00:02:29.860432 | orchestrator | ]
2026-02-27 00:02:29.860435 | orchestrator | + dns_domain = (known after apply)
2026-02-27 00:02:29.860439 | orchestrator | + external = (known after apply)
2026-02-27 00:02:29.860443 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.860447 | orchestrator | + mtu = (known after apply)
2026-02-27 00:02:29.860451 | orchestrator | + name = "net-testbed-management"
2026-02-27 00:02:29.860454 | orchestrator | + port_security_enabled = (known after apply) 
2026-02-27 00:02:29.860462 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.860467 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.860470 | orchestrator | + shared = (known after apply) 2026-02-27 00:02:29.860474 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.860478 | orchestrator | + transparent_vlan = (known after apply) 2026-02-27 00:02:29.860482 | orchestrator | 2026-02-27 00:02:29.860485 | orchestrator | + segments (known after apply) 2026-02-27 00:02:29.860489 | orchestrator | } 2026-02-27 00:02:29.860606 | orchestrator | 2026-02-27 00:02:29.860618 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-02-27 00:02:29.860622 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-02-27 00:02:29.860626 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.860630 | orchestrator | + all_fixed_ips = (known after apply) 2026-02-27 00:02:29.860634 | orchestrator | + all_security_group_ids = (known after apply) 2026-02-27 00:02:29.860641 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.860645 | orchestrator | + device_id = (known after apply) 2026-02-27 00:02:29.860649 | orchestrator | + device_owner = (known after apply) 2026-02-27 00:02:29.860652 | orchestrator | + dns_assignment = (known after apply) 2026-02-27 00:02:29.860656 | orchestrator | + dns_name = (known after apply) 2026-02-27 00:02:29.860660 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.860664 | orchestrator | + mac_address = (known after apply) 2026-02-27 00:02:29.860668 | orchestrator | + network_id = (known after apply) 2026-02-27 00:02:29.860671 | orchestrator | + port_security_enabled = (known after apply) 2026-02-27 00:02:29.860675 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.860679 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.860683 | 
orchestrator | + security_group_ids = (known after apply) 2026-02-27 00:02:29.860686 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.860690 | orchestrator | 2026-02-27 00:02:29.860694 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.860698 | orchestrator | + ip_address = "192.168.16.8/32" 2026-02-27 00:02:29.860702 | orchestrator | } 2026-02-27 00:02:29.860705 | orchestrator | 2026-02-27 00:02:29.860709 | orchestrator | + binding (known after apply) 2026-02-27 00:02:29.860713 | orchestrator | 2026-02-27 00:02:29.860717 | orchestrator | + fixed_ip { 2026-02-27 00:02:29.860721 | orchestrator | + ip_address = "192.168.16.5" 2026-02-27 00:02:29.860725 | orchestrator | + subnet_id = (known after apply) 2026-02-27 00:02:29.860729 | orchestrator | } 2026-02-27 00:02:29.860733 | orchestrator | } 2026-02-27 00:02:29.860864 | orchestrator | 2026-02-27 00:02:29.860875 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-02-27 00:02:29.860880 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-02-27 00:02:29.860884 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.860888 | orchestrator | + all_fixed_ips = (known after apply) 2026-02-27 00:02:29.860891 | orchestrator | + all_security_group_ids = (known after apply) 2026-02-27 00:02:29.860895 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.860899 | orchestrator | + device_id = (known after apply) 2026-02-27 00:02:29.860903 | orchestrator | + device_owner = (known after apply) 2026-02-27 00:02:29.860907 | orchestrator | + dns_assignment = (known after apply) 2026-02-27 00:02:29.860910 | orchestrator | + dns_name = (known after apply) 2026-02-27 00:02:29.860914 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.860918 | orchestrator | + mac_address = (known after apply) 2026-02-27 00:02:29.860922 | orchestrator | + network_id = (known after apply) 2026-02-27 
00:02:29.860925 | orchestrator | + port_security_enabled = (known after apply) 2026-02-27 00:02:29.860929 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.860933 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.860941 | orchestrator | + security_group_ids = (known after apply) 2026-02-27 00:02:29.860944 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.860948 | orchestrator | 2026-02-27 00:02:29.860952 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.860956 | orchestrator | + ip_address = "192.168.16.254/32" 2026-02-27 00:02:29.860960 | orchestrator | } 2026-02-27 00:02:29.860964 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.860968 | orchestrator | + ip_address = "192.168.16.8/32" 2026-02-27 00:02:29.860971 | orchestrator | } 2026-02-27 00:02:29.860975 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.860979 | orchestrator | + ip_address = "192.168.16.9/32" 2026-02-27 00:02:29.860983 | orchestrator | } 2026-02-27 00:02:29.860987 | orchestrator | 2026-02-27 00:02:29.860991 | orchestrator | + binding (known after apply) 2026-02-27 00:02:29.861007 | orchestrator | 2026-02-27 00:02:29.861011 | orchestrator | + fixed_ip { 2026-02-27 00:02:29.861015 | orchestrator | + ip_address = "192.168.16.10" 2026-02-27 00:02:29.861019 | orchestrator | + subnet_id = (known after apply) 2026-02-27 00:02:29.861023 | orchestrator | } 2026-02-27 00:02:29.861026 | orchestrator | } 2026-02-27 00:02:29.861163 | orchestrator | 2026-02-27 00:02:29.861178 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-02-27 00:02:29.861182 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-02-27 00:02:29.861186 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.861190 | orchestrator | + all_fixed_ips = (known after apply) 2026-02-27 00:02:29.861194 | orchestrator | + all_security_group_ids = 
(known after apply) 2026-02-27 00:02:29.861197 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.861201 | orchestrator | + device_id = (known after apply) 2026-02-27 00:02:29.861205 | orchestrator | + device_owner = (known after apply) 2026-02-27 00:02:29.861209 | orchestrator | + dns_assignment = (known after apply) 2026-02-27 00:02:29.861213 | orchestrator | + dns_name = (known after apply) 2026-02-27 00:02:29.861217 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.861220 | orchestrator | + mac_address = (known after apply) 2026-02-27 00:02:29.861224 | orchestrator | + network_id = (known after apply) 2026-02-27 00:02:29.861228 | orchestrator | + port_security_enabled = (known after apply) 2026-02-27 00:02:29.861232 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.861236 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.861239 | orchestrator | + security_group_ids = (known after apply) 2026-02-27 00:02:29.861243 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.861247 | orchestrator | 2026-02-27 00:02:29.861251 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861255 | orchestrator | + ip_address = "192.168.16.254/32" 2026-02-27 00:02:29.861258 | orchestrator | } 2026-02-27 00:02:29.861262 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861266 | orchestrator | + ip_address = "192.168.16.8/32" 2026-02-27 00:02:29.861270 | orchestrator | } 2026-02-27 00:02:29.861274 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861277 | orchestrator | + ip_address = "192.168.16.9/32" 2026-02-27 00:02:29.861281 | orchestrator | } 2026-02-27 00:02:29.861285 | orchestrator | 2026-02-27 00:02:29.861289 | orchestrator | + binding (known after apply) 2026-02-27 00:02:29.861293 | orchestrator | 2026-02-27 00:02:29.861296 | orchestrator | + fixed_ip { 2026-02-27 00:02:29.861300 | orchestrator | + ip_address = "192.168.16.11" 2026-02-27 
00:02:29.861304 | orchestrator | + subnet_id = (known after apply) 2026-02-27 00:02:29.861308 | orchestrator | } 2026-02-27 00:02:29.861312 | orchestrator | } 2026-02-27 00:02:29.861499 | orchestrator | 2026-02-27 00:02:29.861515 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-02-27 00:02:29.861519 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-02-27 00:02:29.861523 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.861527 | orchestrator | + all_fixed_ips = (known after apply) 2026-02-27 00:02:29.861531 | orchestrator | + all_security_group_ids = (known after apply) 2026-02-27 00:02:29.861535 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.861546 | orchestrator | + device_id = (known after apply) 2026-02-27 00:02:29.861549 | orchestrator | + device_owner = (known after apply) 2026-02-27 00:02:29.861553 | orchestrator | + dns_assignment = (known after apply) 2026-02-27 00:02:29.861557 | orchestrator | + dns_name = (known after apply) 2026-02-27 00:02:29.861564 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.861568 | orchestrator | + mac_address = (known after apply) 2026-02-27 00:02:29.861571 | orchestrator | + network_id = (known after apply) 2026-02-27 00:02:29.861575 | orchestrator | + port_security_enabled = (known after apply) 2026-02-27 00:02:29.861579 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.861583 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.861586 | orchestrator | + security_group_ids = (known after apply) 2026-02-27 00:02:29.861590 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.861594 | orchestrator | 2026-02-27 00:02:29.861598 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861602 | orchestrator | + ip_address = "192.168.16.254/32" 2026-02-27 00:02:29.861606 | orchestrator | } 2026-02-27 00:02:29.861610 | 
orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861613 | orchestrator | + ip_address = "192.168.16.8/32" 2026-02-27 00:02:29.861617 | orchestrator | } 2026-02-27 00:02:29.861621 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861625 | orchestrator | + ip_address = "192.168.16.9/32" 2026-02-27 00:02:29.861628 | orchestrator | } 2026-02-27 00:02:29.861632 | orchestrator | 2026-02-27 00:02:29.861636 | orchestrator | + binding (known after apply) 2026-02-27 00:02:29.861640 | orchestrator | 2026-02-27 00:02:29.861644 | orchestrator | + fixed_ip { 2026-02-27 00:02:29.861647 | orchestrator | + ip_address = "192.168.16.12" 2026-02-27 00:02:29.861651 | orchestrator | + subnet_id = (known after apply) 2026-02-27 00:02:29.861655 | orchestrator | } 2026-02-27 00:02:29.861659 | orchestrator | } 2026-02-27 00:02:29.861790 | orchestrator | 2026-02-27 00:02:29.861801 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-02-27 00:02:29.861806 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-02-27 00:02:29.861810 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.861814 | orchestrator | + all_fixed_ips = (known after apply) 2026-02-27 00:02:29.861817 | orchestrator | + all_security_group_ids = (known after apply) 2026-02-27 00:02:29.861821 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.861825 | orchestrator | + device_id = (known after apply) 2026-02-27 00:02:29.861829 | orchestrator | + device_owner = (known after apply) 2026-02-27 00:02:29.861833 | orchestrator | + dns_assignment = (known after apply) 2026-02-27 00:02:29.861836 | orchestrator | + dns_name = (known after apply) 2026-02-27 00:02:29.861840 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.861844 | orchestrator | + mac_address = (known after apply) 2026-02-27 00:02:29.861848 | orchestrator | + network_id = (known after apply) 2026-02-27 00:02:29.861852 
| orchestrator | + port_security_enabled = (known after apply) 2026-02-27 00:02:29.861855 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.861859 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.861863 | orchestrator | + security_group_ids = (known after apply) 2026-02-27 00:02:29.861867 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.861871 | orchestrator | 2026-02-27 00:02:29.861875 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861878 | orchestrator | + ip_address = "192.168.16.254/32" 2026-02-27 00:02:29.861882 | orchestrator | } 2026-02-27 00:02:29.861886 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861890 | orchestrator | + ip_address = "192.168.16.8/32" 2026-02-27 00:02:29.861894 | orchestrator | } 2026-02-27 00:02:29.861898 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.861901 | orchestrator | + ip_address = "192.168.16.9/32" 2026-02-27 00:02:29.861905 | orchestrator | } 2026-02-27 00:02:29.861909 | orchestrator | 2026-02-27 00:02:29.861918 | orchestrator | + binding (known after apply) 2026-02-27 00:02:29.861922 | orchestrator | 2026-02-27 00:02:29.861926 | orchestrator | + fixed_ip { 2026-02-27 00:02:29.861930 | orchestrator | + ip_address = "192.168.16.13" 2026-02-27 00:02:29.861934 | orchestrator | + subnet_id = (known after apply) 2026-02-27 00:02:29.861938 | orchestrator | } 2026-02-27 00:02:29.861941 | orchestrator | } 2026-02-27 00:02:29.862141 | orchestrator | 2026-02-27 00:02:29.862157 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-02-27 00:02:29.862162 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-02-27 00:02:29.862165 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.862169 | orchestrator | + all_fixed_ips = (known after apply) 2026-02-27 00:02:29.862173 | orchestrator | + all_security_group_ids = (known after apply) 
2026-02-27 00:02:29.862177 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.862181 | orchestrator | + device_id = (known after apply) 2026-02-27 00:02:29.862184 | orchestrator | + device_owner = (known after apply) 2026-02-27 00:02:29.862188 | orchestrator | + dns_assignment = (known after apply) 2026-02-27 00:02:29.862192 | orchestrator | + dns_name = (known after apply) 2026-02-27 00:02:29.862196 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.862200 | orchestrator | + mac_address = (known after apply) 2026-02-27 00:02:29.862203 | orchestrator | + network_id = (known after apply) 2026-02-27 00:02:29.862207 | orchestrator | + port_security_enabled = (known after apply) 2026-02-27 00:02:29.862211 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.862215 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.862219 | orchestrator | + security_group_ids = (known after apply) 2026-02-27 00:02:29.862222 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.862227 | orchestrator | 2026-02-27 00:02:29.862231 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.862235 | orchestrator | + ip_address = "192.168.16.254/32" 2026-02-27 00:02:29.862239 | orchestrator | } 2026-02-27 00:02:29.862243 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.862246 | orchestrator | + ip_address = "192.168.16.8/32" 2026-02-27 00:02:29.862250 | orchestrator | } 2026-02-27 00:02:29.862254 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.862258 | orchestrator | + ip_address = "192.168.16.9/32" 2026-02-27 00:02:29.862261 | orchestrator | } 2026-02-27 00:02:29.862265 | orchestrator | 2026-02-27 00:02:29.862269 | orchestrator | + binding (known after apply) 2026-02-27 00:02:29.862273 | orchestrator | 2026-02-27 00:02:29.862276 | orchestrator | + fixed_ip { 2026-02-27 00:02:29.862280 | orchestrator | + ip_address = "192.168.16.14" 2026-02-27 00:02:29.862284 | orchestrator 
| + subnet_id = (known after apply) 2026-02-27 00:02:29.862288 | orchestrator | } 2026-02-27 00:02:29.862291 | orchestrator | } 2026-02-27 00:02:29.862455 | orchestrator | 2026-02-27 00:02:29.862468 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-02-27 00:02:29.862473 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-02-27 00:02:29.862477 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.862481 | orchestrator | + all_fixed_ips = (known after apply) 2026-02-27 00:02:29.862485 | orchestrator | + all_security_group_ids = (known after apply) 2026-02-27 00:02:29.862489 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.862493 | orchestrator | + device_id = (known after apply) 2026-02-27 00:02:29.862496 | orchestrator | + device_owner = (known after apply) 2026-02-27 00:02:29.862500 | orchestrator | + dns_assignment = (known after apply) 2026-02-27 00:02:29.862504 | orchestrator | + dns_name = (known after apply) 2026-02-27 00:02:29.862508 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.862512 | orchestrator | + mac_address = (known after apply) 2026-02-27 00:02:29.862515 | orchestrator | + network_id = (known after apply) 2026-02-27 00:02:29.862519 | orchestrator | + port_security_enabled = (known after apply) 2026-02-27 00:02:29.862523 | orchestrator | + qos_policy_id = (known after apply) 2026-02-27 00:02:29.862532 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.862536 | orchestrator | + security_group_ids = (known after apply) 2026-02-27 00:02:29.862540 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.862544 | orchestrator | 2026-02-27 00:02:29.862547 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.862553 | orchestrator | + ip_address = "192.168.16.254/32" 2026-02-27 00:02:29.862560 | orchestrator | } 2026-02-27 00:02:29.862565 | orchestrator | + allowed_address_pairs 
{ 2026-02-27 00:02:29.862572 | orchestrator | + ip_address = "192.168.16.8/32" 2026-02-27 00:02:29.862578 | orchestrator | } 2026-02-27 00:02:29.862584 | orchestrator | + allowed_address_pairs { 2026-02-27 00:02:29.862590 | orchestrator | + ip_address = "192.168.16.9/32" 2026-02-27 00:02:29.862596 | orchestrator | } 2026-02-27 00:02:29.862603 | orchestrator | 2026-02-27 00:02:29.862612 | orchestrator | + binding (known after apply) 2026-02-27 00:02:29.862616 | orchestrator | 2026-02-27 00:02:29.862620 | orchestrator | + fixed_ip { 2026-02-27 00:02:29.862624 | orchestrator | + ip_address = "192.168.16.15" 2026-02-27 00:02:29.862629 | orchestrator | + subnet_id = (known after apply) 2026-02-27 00:02:29.862635 | orchestrator | } 2026-02-27 00:02:29.862642 | orchestrator | } 2026-02-27 00:02:29.862701 | orchestrator | 2026-02-27 00:02:29.862713 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-02-27 00:02:29.862717 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-02-27 00:02:29.862721 | orchestrator | + force_destroy = false 2026-02-27 00:02:29.862725 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.862732 | orchestrator | + port_id = (known after apply) 2026-02-27 00:02:29.862738 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.862744 | orchestrator | + router_id = (known after apply) 2026-02-27 00:02:29.862751 | orchestrator | + subnet_id = (known after apply) 2026-02-27 00:02:29.862757 | orchestrator | } 2026-02-27 00:02:29.862849 | orchestrator | 2026-02-27 00:02:29.862862 | orchestrator | # openstack_networking_router_v2.router will be created 2026-02-27 00:02:29.862866 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-02-27 00:02:29.862870 | orchestrator | + admin_state_up = (known after apply) 2026-02-27 00:02:29.862874 | orchestrator | + all_tags = (known after apply) 2026-02-27 00:02:29.862878 | 
orchestrator | + availability_zone_hints = [ 2026-02-27 00:02:29.862882 | orchestrator | + "nova", 2026-02-27 00:02:29.862886 | orchestrator | ] 2026-02-27 00:02:29.862889 | orchestrator | + distributed = (known after apply) 2026-02-27 00:02:29.862893 | orchestrator | + enable_snat = (known after apply) 2026-02-27 00:02:29.862897 | orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-02-27 00:02:29.862901 | orchestrator | + external_qos_policy_id = (known after apply) 2026-02-27 00:02:29.862905 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.862908 | orchestrator | + name = "testbed" 2026-02-27 00:02:29.862912 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.862916 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.862920 | orchestrator | 2026-02-27 00:02:29.862924 | orchestrator | + external_fixed_ip (known after apply) 2026-02-27 00:02:29.862927 | orchestrator | } 2026-02-27 00:02:29.863041 | orchestrator | 2026-02-27 00:02:29.863054 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-02-27 00:02:29.863059 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-02-27 00:02:29.863063 | orchestrator | + description = "ssh" 2026-02-27 00:02:29.863067 | orchestrator | + direction = "ingress" 2026-02-27 00:02:29.863071 | orchestrator | + ethertype = "IPv4" 2026-02-27 00:02:29.863075 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.863079 | orchestrator | + port_range_max = 22 2026-02-27 00:02:29.863083 | orchestrator | + port_range_min = 22 2026-02-27 00:02:29.863087 | orchestrator | + protocol = "tcp" 2026-02-27 00:02:29.863091 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.863099 | orchestrator | + remote_address_group_id = (known after apply) 2026-02-27 00:02:29.863103 | orchestrator | + remote_group_id = (known after apply) 2026-02-27 
00:02:29.863107 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-02-27 00:02:29.863111 | orchestrator | + security_group_id = (known after apply) 2026-02-27 00:02:29.863115 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.863119 | orchestrator | } 2026-02-27 00:02:29.863197 | orchestrator | 2026-02-27 00:02:29.863208 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-02-27 00:02:29.863213 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-02-27 00:02:29.863217 | orchestrator | + description = "wireguard" 2026-02-27 00:02:29.863220 | orchestrator | + direction = "ingress" 2026-02-27 00:02:29.863224 | orchestrator | + ethertype = "IPv4" 2026-02-27 00:02:29.863228 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.863232 | orchestrator | + port_range_max = 51820 2026-02-27 00:02:29.863236 | orchestrator | + port_range_min = 51820 2026-02-27 00:02:29.863239 | orchestrator | + protocol = "udp" 2026-02-27 00:02:29.863243 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.863247 | orchestrator | + remote_address_group_id = (known after apply) 2026-02-27 00:02:29.863251 | orchestrator | + remote_group_id = (known after apply) 2026-02-27 00:02:29.863255 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-02-27 00:02:29.863259 | orchestrator | + security_group_id = (known after apply) 2026-02-27 00:02:29.863262 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.863266 | orchestrator | } 2026-02-27 00:02:29.863326 | orchestrator | 2026-02-27 00:02:29.863337 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-02-27 00:02:29.863342 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-02-27 00:02:29.863346 | orchestrator | + direction = "ingress" 2026-02-27 00:02:29.863349 
| orchestrator | + ethertype = "IPv4" 2026-02-27 00:02:29.863353 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.863357 | orchestrator | + protocol = "tcp" 2026-02-27 00:02:29.863361 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.863365 | orchestrator | + remote_address_group_id = (known after apply) 2026-02-27 00:02:29.863368 | orchestrator | + remote_group_id = (known after apply) 2026-02-27 00:02:29.863372 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-02-27 00:02:29.863376 | orchestrator | + security_group_id = (known after apply) 2026-02-27 00:02:29.863380 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.863384 | orchestrator | } 2026-02-27 00:02:29.863442 | orchestrator | 2026-02-27 00:02:29.863453 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-02-27 00:02:29.863458 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-02-27 00:02:29.863461 | orchestrator | + direction = "ingress" 2026-02-27 00:02:29.863465 | orchestrator | + ethertype = "IPv4" 2026-02-27 00:02:29.863469 | orchestrator | + id = (known after apply) 2026-02-27 00:02:29.863473 | orchestrator | + protocol = "udp" 2026-02-27 00:02:29.863476 | orchestrator | + region = (known after apply) 2026-02-27 00:02:29.863480 | orchestrator | + remote_address_group_id = (known after apply) 2026-02-27 00:02:29.863484 | orchestrator | + remote_group_id = (known after apply) 2026-02-27 00:02:29.863488 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-02-27 00:02:29.863491 | orchestrator | + security_group_id = (known after apply) 2026-02-27 00:02:29.863495 | orchestrator | + tenant_id = (known after apply) 2026-02-27 00:02:29.863499 | orchestrator | } 2026-02-27 00:02:29.863561 | orchestrator | 2026-02-27 00:02:29.863572 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will 
be created
2026-02-27 00:02:29.863580 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
2026-02-27 00:02:29.863584 | orchestrator | + direction = "ingress"
2026-02-27 00:02:29.863588 | orchestrator | + ethertype = "IPv4"
2026-02-27 00:02:29.863592 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.863596 | orchestrator | + protocol = "icmp"
2026-02-27 00:02:29.863599 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.863603 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-27 00:02:29.863607 | orchestrator | + remote_group_id = (known after apply)
2026-02-27 00:02:29.863611 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-27 00:02:29.863615 | orchestrator | + security_group_id = (known after apply)
2026-02-27 00:02:29.863618 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.863622 | orchestrator | }
2026-02-27 00:02:29.863684 | orchestrator |
2026-02-27 00:02:29.863696 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2026-02-27 00:02:29.863700 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2026-02-27 00:02:29.863704 | orchestrator | + direction = "ingress"
2026-02-27 00:02:29.863708 | orchestrator | + ethertype = "IPv4"
2026-02-27 00:02:29.863712 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.863715 | orchestrator | + protocol = "tcp"
2026-02-27 00:02:29.863719 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.863723 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-27 00:02:29.863730 | orchestrator | + remote_group_id = (known after apply)
2026-02-27 00:02:29.863734 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-27 00:02:29.863738 | orchestrator | + security_group_id = (known after apply)
2026-02-27 00:02:29.863742 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.863745 | orchestrator | }
2026-02-27 00:02:29.863804 | orchestrator |
2026-02-27 00:02:29.863815 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2026-02-27 00:02:29.863819 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2026-02-27 00:02:29.863823 | orchestrator | + direction = "ingress"
2026-02-27 00:02:29.863827 | orchestrator | + ethertype = "IPv4"
2026-02-27 00:02:29.863831 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.863834 | orchestrator | + protocol = "udp"
2026-02-27 00:02:29.863838 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.863842 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-27 00:02:29.863846 | orchestrator | + remote_group_id = (known after apply)
2026-02-27 00:02:29.863850 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-27 00:02:29.863853 | orchestrator | + security_group_id = (known after apply)
2026-02-27 00:02:29.863857 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.863861 | orchestrator | }
2026-02-27 00:02:29.863919 | orchestrator |
2026-02-27 00:02:29.863930 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2026-02-27 00:02:29.863935 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2026-02-27 00:02:29.863939 | orchestrator | + direction = "ingress"
2026-02-27 00:02:29.863945 | orchestrator | + ethertype = "IPv4"
2026-02-27 00:02:29.863949 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.863953 | orchestrator | + protocol = "icmp"
2026-02-27 00:02:29.863956 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.863960 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-27 00:02:29.863964 | orchestrator | + remote_group_id = (known after apply)
2026-02-27 00:02:29.863968 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-27 00:02:29.863972 | orchestrator | + security_group_id = (known after apply)
2026-02-27 00:02:29.863975 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.863982 | orchestrator | }
2026-02-27 00:02:29.864060 | orchestrator |
2026-02-27 00:02:29.864071 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2026-02-27 00:02:29.864076 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2026-02-27 00:02:29.864080 | orchestrator | + description = "vrrp"
2026-02-27 00:02:29.864084 | orchestrator | + direction = "ingress"
2026-02-27 00:02:29.864087 | orchestrator | + ethertype = "IPv4"
2026-02-27 00:02:29.864091 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.864095 | orchestrator | + protocol = "112"
2026-02-27 00:02:29.864099 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.864103 | orchestrator | + remote_address_group_id = (known after apply)
2026-02-27 00:02:29.864106 | orchestrator | + remote_group_id = (known after apply)
2026-02-27 00:02:29.864110 | orchestrator | + remote_ip_prefix = "0.0.0.0/0"
2026-02-27 00:02:29.864114 | orchestrator | + security_group_id = (known after apply)
2026-02-27 00:02:29.864118 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.864122 | orchestrator | }
2026-02-27 00:02:29.864165 | orchestrator |
2026-02-27 00:02:29.864176 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created
2026-02-27 00:02:29.864181 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" {
2026-02-27 00:02:29.864185 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.864189 | orchestrator | + description = "management security group"
2026-02-27 00:02:29.864193 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.864196 | orchestrator | + name = "testbed-management"
2026-02-27 00:02:29.864200 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.864204 | orchestrator | + stateful = (known after apply)
2026-02-27 00:02:29.864208 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.864211 | orchestrator | }
2026-02-27 00:02:29.864257 | orchestrator |
2026-02-27 00:02:29.864268 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created
2026-02-27 00:02:29.864272 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" {
2026-02-27 00:02:29.864276 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.864280 | orchestrator | + description = "node security group"
2026-02-27 00:02:29.864283 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.864287 | orchestrator | + name = "testbed-node"
2026-02-27 00:02:29.864291 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.864295 | orchestrator | + stateful = (known after apply)
2026-02-27 00:02:29.864298 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.864302 | orchestrator | }
2026-02-27 00:02:29.864404 | orchestrator |
2026-02-27 00:02:29.864415 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created
2026-02-27 00:02:29.864419 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" {
2026-02-27 00:02:29.864423 | orchestrator | + all_tags = (known after apply)
2026-02-27 00:02:29.864427 | orchestrator | + cidr = "192.168.16.0/20"
2026-02-27 00:02:29.864431 | orchestrator | + dns_nameservers = [
2026-02-27 00:02:29.864435 | orchestrator | + "8.8.8.8",
2026-02-27 00:02:29.864439 | orchestrator | + "9.9.9.9",
2026-02-27 00:02:29.864443 | orchestrator | ]
2026-02-27 00:02:29.864446 | orchestrator | + enable_dhcp = true
2026-02-27 00:02:29.864450 | orchestrator | + gateway_ip = (known after apply)
2026-02-27 00:02:29.864454 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.864458 | orchestrator | + ip_version = 4
2026-02-27 00:02:29.864462 | orchestrator | + ipv6_address_mode = (known after apply)
2026-02-27 00:02:29.864465 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-02-27 00:02:29.864469 | orchestrator | + name = "subnet-testbed-management"
2026-02-27 00:02:29.864473 | orchestrator | + network_id = (known after apply)
2026-02-27 00:02:29.864477 | orchestrator | + no_gateway = false
2026-02-27 00:02:29.864481 | orchestrator | + region = (known after apply)
2026-02-27 00:02:29.864484 | orchestrator | + service_types = (known after apply)
2026-02-27 00:02:29.864492 | orchestrator | + tenant_id = (known after apply)
2026-02-27 00:02:29.864495 | orchestrator |
2026-02-27 00:02:29.864499 | orchestrator | + allocation_pool {
2026-02-27 00:02:29.864503 | orchestrator | + end = "192.168.31.250"
2026-02-27 00:02:29.864507 | orchestrator | + start = "192.168.31.200"
2026-02-27 00:02:29.864511 | orchestrator | }
2026-02-27 00:02:29.864515 | orchestrator | }
2026-02-27 00:02:29.864543 | orchestrator |
2026-02-27 00:02:29.864554 | orchestrator | # terraform_data.image will be created
2026-02-27 00:02:29.864558 | orchestrator | + resource "terraform_data" "image" {
2026-02-27 00:02:29.864562 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.864566 | orchestrator | + input = "Ubuntu 24.04"
2026-02-27 00:02:29.864570 | orchestrator | + output = (known after apply)
2026-02-27 00:02:29.864574 | orchestrator | }
2026-02-27 00:02:29.864601 | orchestrator |
2026-02-27 00:02:29.864612 | orchestrator | # terraform_data.image_node will be created
2026-02-27 00:02:29.864616 | orchestrator | + resource "terraform_data" "image_node" {
2026-02-27 00:02:29.864620 | orchestrator | + id = (known after apply)
2026-02-27 00:02:29.864624 | orchestrator | + input = "Ubuntu 24.04"
2026-02-27 00:02:29.864628 | orchestrator | + output = (known after apply)
2026-02-27 00:02:29.864631 | orchestrator | }
2026-02-27 00:02:29.864646 | orchestrator |
2026-02-27 00:02:29.864650 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-02-27 00:02:29.864661 | orchestrator |
2026-02-27 00:02:29.864666 | orchestrator | Changes to Outputs:
2026-02-27 00:02:29.864675 | orchestrator | + manager_address = (sensitive value)
2026-02-27 00:02:29.864680 | orchestrator | + private_key = (sensitive value)
2026-02-27 00:02:30.094088 | orchestrator | terraform_data.image_node: Creating...
2026-02-27 00:02:30.094154 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=1db490d9-a7d1-826c-5cfb-f4e6db7c9bc9]
2026-02-27 00:02:30.094953 | orchestrator | terraform_data.image: Creating...
2026-02-27 00:02:30.095795 | orchestrator | terraform_data.image: Creation complete after 0s [id=26dadddd-7ee7-58ae-33f8-943e3e99a915]
2026-02-27 00:02:30.129262 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-27 00:02:30.129344 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-27 00:02:30.135937 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-27 00:02:30.137050 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-27 00:02:30.150120 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-27 00:02:30.150169 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-27 00:02:30.150175 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-27 00:02:30.150563 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-27 00:02:30.151849 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-27 00:02:30.160083 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-27 00:02:30.582291 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-27 00:02:30.886257 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-27 00:02:30.886280 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-27 00:02:30.886286 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-27 00:02:30.886294 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-27 00:02:30.886299 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-27 00:02:31.133547 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=5ff05640-1de1-4f77-a415-219151b542cd]
2026-02-27 00:02:31.157344 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-27 00:02:33.752829 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=a71caac6-92e2-45f9-9373-56e68f91355d]
2026-02-27 00:02:33.759460 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-27 00:02:33.767652 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=7eee5dc0-08e1-454c-92c3-6b2c2994eeca]
2026-02-27 00:02:33.772857 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-27 00:02:33.804961 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=109976ce-0a0b-48dc-bf94-df447195f5f3]
2026-02-27 00:02:33.811633 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-27 00:02:33.829300 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=c4916fb9-2e52-4262-9b09-55f9a233c222]
2026-02-27 00:02:33.832269 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-27 00:02:33.885666 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=e3da6966-e430-4abd-922c-0deb6c0107da]
2026-02-27 00:02:33.889059 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=684e370a-eec5-4526-b882-46c5ae49497d]
2026-02-27 00:02:33.890793 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-27 00:02:33.896453 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-27 00:02:33.908471 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=94dd7bd0-cf74-4f65-8a31-220357cecc47]
2026-02-27 00:02:33.916571 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=7c486bab-939d-4b28-a8a9-5aea680a535b]
2026-02-27 00:02:33.925359 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-27 00:02:33.930877 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=31dfd5e5-18cf-471e-b1c7-8ca54ae9145c]
2026-02-27 00:02:33.932431 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-27 00:02:33.938234 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=75f944263088b0c49debbeecb9d0814c0af8301b]
2026-02-27 00:02:33.942684 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-27 00:02:33.944060 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=6debd2b005083e5656b60fac7c7a8f595edf1a2e]
2026-02-27 00:02:34.535789 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=d07f98ad-3d62-49f5-84e9-af5adb521297]
2026-02-27 00:02:35.166896 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=ed2f6b94-c7e2-4507-b78e-c177ec0751f7]
2026-02-27 00:02:35.172523 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-27 00:02:37.264744 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=7b66f543-9fce-4c0f-ad03-37f043f64686]
2026-02-27 00:02:37.299839 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=3470a12e-124f-400f-8df7-ef48fe544e4b]
2026-02-27 00:02:37.358485 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=26592820-9606-46fa-9763-c5d42d9ec173]
2026-02-27 00:02:37.395576 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=4935a670-85d5-4728-bfd3-2cafc3ce60ad]
2026-02-27 00:02:37.412510 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=472bec66-48f8-4240-93c3-59b48e4ed72f]
2026-02-27 00:02:37.422979 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9]
2026-02-27 00:02:38.348877 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=8b0f1d9c-dee6-4b37-acef-f4768e3bd015]
2026-02-27 00:02:38.355373 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-27 00:02:38.356464 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-27 00:02:38.357318 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-27 00:02:38.545204 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=80c57f52-25b5-4e3f-a75f-d7c58c4395d7]
2026-02-27 00:02:38.561600 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-27 00:02:38.562442 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-27 00:02:38.566434 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-27 00:02:38.567036 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-27 00:02:38.568103 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-27 00:02:38.568979 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-27 00:02:38.571672 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-27 00:02:38.577207 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-27 00:02:38.683068 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=4471ffe7-39fc-4ab4-831a-4fd7eab4d7ea]
2026-02-27 00:02:38.693654 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-27 00:02:38.909499 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=53cb7bf2-1379-4ef7-9ebe-14a96b8ffab2]
2026-02-27 00:02:38.921921 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-27 00:02:39.238652 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=703ed74c-51a4-46ee-9eba-4d965a938100]
2026-02-27 00:02:39.245520 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-27 00:02:39.415451 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=a5da95bd-f1cb-457c-9cc6-d633f0bed679]
2026-02-27 00:02:39.421808 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-27 00:02:39.439073 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f9e38bc8-a224-4358-aded-fbe1500f5d34]
2026-02-27 00:02:39.445461 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-27 00:02:39.462897 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=658474f4-776e-4695-95df-2b4a6fb60356]
2026-02-27 00:02:39.469378 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-27 00:02:39.475182 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=a1d1d88d-01dd-4a9f-a688-eb762328e22d]
2026-02-27 00:02:39.480448 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-27 00:02:39.521742 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=234a0441-5ee1-4ef8-9fd3-2de988ba4aca]
2026-02-27 00:02:39.532024 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-27 00:02:39.869916 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=fb6dc571-de7d-479f-8a99-62bb4f4bfeea]
2026-02-27 00:02:39.933352 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=b6334531-a7a8-437c-86aa-a20e9f978403]
2026-02-27 00:02:39.965468 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=dfb7f94c-e269-431a-93e6-dabe4b0c6e5b]
2026-02-27 00:02:40.147197 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=00516ac1-29e3-4bc2-8adc-102032c23d3b]
2026-02-27 00:02:40.303696 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=b24ab7b1-1e6f-488c-95d8-8a5e46bc9a2d]
2026-02-27 00:02:40.352741 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=dc55729a-d56f-4e76-a870-906824568678]
2026-02-27 00:02:40.696607 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=40d107c6-ffd3-4c2d-85e1-fc58b89b07ae]
2026-02-27 00:02:40.758530 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 2s [id=a05290a9-4e7f-413c-aa15-b10b1f6599da]
2026-02-27 00:02:40.910796 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=19182b48-99ce-43b5-b611-743e9969c643]
2026-02-27 00:02:44.717797 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=09a8341a-b3a7-41ab-aebe-9c5d65457f76]
2026-02-27 00:02:44.737210 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-27 00:02:44.753281 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-27 00:02:44.753593 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-27 00:02:44.759853 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-27 00:02:44.760072 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-27 00:02:44.761537 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-27 00:02:44.768137 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-27 00:02:47.505305 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 3s [id=4b8032d6-c452-47ec-8090-95bc6277f664]
2026-02-27 00:02:47.512073 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-27 00:02:47.521150 | orchestrator | local_file.inventory: Creating...
2026-02-27 00:02:47.522190 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-27 00:02:47.528944 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8cfb389f20aba008a19e5357d2477d8735a0dce7]
2026-02-27 00:02:47.529138 | orchestrator | local_file.inventory: Creation complete after 0s [id=ba2fc49f21d31d552d94dc84a05b24f7aa48b3d7]
2026-02-27 00:02:49.026938 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=4b8032d6-c452-47ec-8090-95bc6277f664]
2026-02-27 00:02:54.756034 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-27 00:02:54.756299 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-27 00:02:54.765800 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-27 00:02:54.766903 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-27 00:02:54.766963 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-27 00:02:54.772278 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-27 00:03:04.765353 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-27 00:03:04.765502 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-27 00:03:04.766523 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-27 00:03:04.767768 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-27 00:03:04.767843 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-27 00:03:04.773156 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-27 00:03:14.774897 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-02-27 00:03:14.775056 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-27 00:03:14.775074 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-27 00:03:14.775086 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-27 00:03:14.775097 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-27 00:03:14.775112 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-27 00:03:24.784725 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed]
2026-02-27 00:03:24.784830 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2026-02-27 00:03:24.784854 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed]
2026-02-27 00:03:24.784864 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed]
2026-02-27 00:03:24.784874 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2026-02-27 00:03:24.784883 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2026-02-27 00:03:25.615826 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=2a9a5441-440e-4b70-885e-6347b82a2ef4]
2026-02-27 00:03:25.664647 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 41s [id=39fb5742-e7dc-4a4e-bf7b-83dbd563ac21]
2026-02-27 00:03:25.668672 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 41s [id=b91a9a33-d6ef-40b5-974d-561d6bbbb538]
2026-02-27 00:03:34.793152 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed]
2026-02-27 00:03:34.793219 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed]
2026-02-27 00:03:34.793232 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed]
2026-02-27 00:03:35.485107 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 50s [id=3cd91c13-7386-472b-bead-384e4464e498]
2026-02-27 00:03:35.729699 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=5625b3a4-2efe-4367-8b51-89a92a98534b]
2026-02-27 00:03:35.930075 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 51s [id=bb12aa83-04d2-47cc-9059-100f5cb34e0f]
2026-02-27 00:03:35.973564 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-27 00:03:35.981334 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-27 00:03:35.981703 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-27 00:03:35.990377 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-27 00:03:35.997581 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4911311298726288672]
2026-02-27 00:03:36.003363 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-27 00:03:36.003406 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-27 00:03:36.003411 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-27 00:03:36.003420 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-27 00:03:36.003611 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-27 00:03:36.035781 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-27 00:03:36.059265 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-27 00:03:39.342695 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=bb12aa83-04d2-47cc-9059-100f5cb34e0f/94dd7bd0-cf74-4f65-8a31-220357cecc47]
2026-02-27 00:03:39.345684 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=39fb5742-e7dc-4a4e-bf7b-83dbd563ac21/7c486bab-939d-4b28-a8a9-5aea680a535b]
2026-02-27 00:03:39.391574 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=2a9a5441-440e-4b70-885e-6347b82a2ef4/109976ce-0a0b-48dc-bf94-df447195f5f3]
2026-02-27 00:03:39.432158 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=2a9a5441-440e-4b70-885e-6347b82a2ef4/684e370a-eec5-4526-b882-46c5ae49497d]
2026-02-27 00:03:39.448443 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=bb12aa83-04d2-47cc-9059-100f5cb34e0f/e3da6966-e430-4abd-922c-0deb6c0107da]
2026-02-27 00:03:39.483873 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=39fb5742-e7dc-4a4e-bf7b-83dbd563ac21/31dfd5e5-18cf-471e-b1c7-8ca54ae9145c]
2026-02-27 00:03:45.522833 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 10s [id=2a9a5441-440e-4b70-885e-6347b82a2ef4/7eee5dc0-08e1-454c-92c3-6b2c2994eeca]
2026-02-27 00:03:45.537939 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 10s [id=39fb5742-e7dc-4a4e-bf7b-83dbd563ac21/c4916fb9-2e52-4262-9b09-55f9a233c222]
2026-02-27 00:03:45.562117 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=bb12aa83-04d2-47cc-9059-100f5cb34e0f/a71caac6-92e2-45f9-9373-56e68f91355d]
2026-02-27 00:03:46.063346 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-27 00:03:56.064732 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-27 00:03:56.434112 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=a244ee44-3fab-43e4-b520-ca964d725434]
2026-02-27 00:03:58.615860 | orchestrator |
2026-02-27 00:03:58.615956 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-27 00:03:58.615981 | orchestrator |
2026-02-27 00:03:58.615986 | orchestrator | Outputs:
2026-02-27 00:03:58.615991 | orchestrator |
2026-02-27 00:03:58.615995 | orchestrator | manager_address =
2026-02-27 00:03:58.615999 | orchestrator | private_key =
2026-02-27 00:03:58.966722 | orchestrator | ok: Runtime: 0:01:33.414540
2026-02-27 00:03:59.003467 |
2026-02-27 00:03:59.003605 | TASK [Fetch manager address]
2026-02-27 00:03:59.504787 | orchestrator | ok
2026-02-27 00:03:59.516918 |
2026-02-27 00:03:59.517081 | TASK [Set manager_host address]
2026-02-27 00:03:59.617799 | orchestrator | ok
2026-02-27 00:03:59.626548 |
2026-02-27 00:03:59.626666 | LOOP [Update ansible collections]
2026-02-27 00:04:03.250338 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-27 00:04:03.250739 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-27 00:04:03.250817 | orchestrator | Starting galaxy collection install process
2026-02-27 00:04:03.250895 | orchestrator | Process install dependency map
2026-02-27 00:04:03.250942 | orchestrator | Starting collection install process
2026-02-27 00:04:03.251034 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2026-02-27 00:04:03.251084 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2026-02-27 00:04:03.251150 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-27 00:04:03.251252 | orchestrator | ok: Item: commons Runtime: 0:00:03.298259
2026-02-27 00:04:04.590709 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-27 00:04:04.590863 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-27 00:04:04.590897 | orchestrator | Starting galaxy collection install process
2026-02-27 00:04:04.590922 | orchestrator | Process install dependency map
2026-02-27 00:04:04.590943 | orchestrator | Starting collection install process
2026-02-27 00:04:04.590978 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2026-02-27 00:04:04.591000 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2026-02-27 00:04:04.591021 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-27 00:04:04.591057 | orchestrator | ok: Item: services Runtime: 0:00:01.092648
2026-02-27 00:04:04.609249 |
2026-02-27 00:04:04.609454 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-27 00:04:15.168355 | orchestrator | ok
2026-02-27 00:04:15.179510 |
2026-02-27 00:04:15.179639 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-27 00:05:15.230906 | orchestrator | ok
2026-02-27 00:05:15.238626 |
2026-02-27 00:05:15.238746 | TASK [Fetch manager ssh hostkey]
2026-02-27 00:05:16.824956 | orchestrator | Output suppressed because no_log was given
2026-02-27 00:05:16.841593 |
2026-02-27 00:05:16.841771 | TASK [Get ssh keypair from terraform environment]
2026-02-27 00:05:17.379801 | orchestrator | ok: Runtime: 0:00:00.010554
2026-02-27 00:05:17.409281 |
2026-02-27 00:05:17.409518 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-27 00:05:17.463780 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-27 00:05:17.475462 |
2026-02-27 00:05:17.475621 | TASK [Run manager part 0]
2026-02-27 00:05:18.497007 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-27 00:05:18.555927 | orchestrator |
2026-02-27 00:05:18.556007 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-27 00:05:18.556021 | orchestrator |
2026-02-27 00:05:18.556047 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-27 00:05:20.660645 | orchestrator | ok: [testbed-manager]
2026-02-27 00:05:20.660711 | orchestrator |
2026-02-27 00:05:20.660739 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-27 00:05:20.660750 | orchestrator |
2026-02-27 00:05:20.660761 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-27 00:05:22.674722 | orchestrator | ok: [testbed-manager]
2026-02-27 00:05:22.674771 | orchestrator |
2026-02-27 00:05:22.674778 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-27 00:05:23.386129 | orchestrator | ok: [testbed-manager]
2026-02-27 00:05:23.386265 | orchestrator |
2026-02-27 00:05:23.386278 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-27 00:05:23.454613 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:05:23.454671 | orchestrator |
2026-02-27 00:05:23.454681 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-27 00:05:23.504983 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:05:23.505037 | orchestrator |
2026-02-27 00:05:23.505046 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-27 00:05:23.554156 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:05:23.554212 | orchestrator |
2026-02-27 00:05:23.554218 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-27 00:05:23.593817 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:05:23.593906 | orchestrator |
2026-02-27 00:05:23.593918 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-02-27 00:05:23.624240 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:05:23.624299 | orchestrator |
2026-02-27 00:05:23.624308 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-02-27 00:05:23.664587 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:05:23.664658 | orchestrator |
2026-02-27 00:05:23.664672 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-02-27 00:05:23.701839 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:05:23.701936 | orchestrator |
2026-02-27 00:05:23.701948 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-02-27 00:05:24.548477 | orchestrator | changed: [testbed-manager]
2026-02-27 00:05:24.548541 | orchestrator |
2026-02-27 00:05:24.548548 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-02-27 00:08:19.427155 | orchestrator | changed: [testbed-manager]
2026-02-27 00:08:19.427242 | orchestrator |
2026-02-27 00:08:19.427269 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-27 00:10:08.232053 | orchestrator | changed: [testbed-manager]
2026-02-27 00:10:08.232094 | orchestrator |
2026-02-27 00:10:08.232103 | orchestrator | TASK [Install required packages] ***********************************************
2026-02-27 00:10:29.809289 | orchestrator | changed: [testbed-manager]
2026-02-27 00:10:29.809380 | orchestrator |
2026-02-27 00:10:29.810101 | orchestrator | TASK [Remove some python packages] *********************************************
2026-02-27 00:10:39.971299 | orchestrator | changed: [testbed-manager]
2026-02-27 00:10:39.971386 | orchestrator |
2026-02-27 00:10:39.971403 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-02-27 00:10:40.024950 | orchestrator | ok: [testbed-manager]
2026-02-27 00:10:40.025011 | orchestrator |
2026-02-27 00:10:40.025022 | orchestrator | TASK [Get current user] ********************************************************
2026-02-27 00:10:40.859463 | orchestrator | ok: [testbed-manager]
2026-02-27 00:10:40.859499 | orchestrator |
2026-02-27 00:10:40.859506 | orchestrator | TASK [Create venv directory] ***************************************************
2026-02-27 00:10:41.611349 | orchestrator | changed: [testbed-manager]
2026-02-27 00:10:41.611441 | orchestrator |
2026-02-27 00:10:41.611459 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-02-27 00:10:47.867596 | orchestrator | changed: [testbed-manager]
2026-02-27 00:10:47.867660 | orchestrator |
2026-02-27 00:10:47.867695 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-02-27 00:10:53.976096 | orchestrator | changed: [testbed-manager]
2026-02-27 00:10:53.976186 | orchestrator |
2026-02-27 00:10:53.976206 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2026-02-27 00:10:56.700910 | orchestrator | changed: [testbed-manager]
2026-02-27 00:10:56.700954 | orchestrator |
2026-02-27 00:10:56.700963 | orchestrator | TASK
[Install docker >= 7.1.0] ************************************************* 2026-02-27 00:10:58.502609 | orchestrator | changed: [testbed-manager] 2026-02-27 00:10:58.502698 | orchestrator | 2026-02-27 00:10:58.502715 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-27 00:10:59.671392 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-27 00:10:59.671487 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-27 00:10:59.671502 | orchestrator | 2026-02-27 00:10:59.671515 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-27 00:10:59.717197 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-27 00:10:59.717251 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-27 00:10:59.717260 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-27 00:10:59.717268 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-27 00:11:04.841007 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-27 00:11:04.841208 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-27 00:11:04.841215 | orchestrator | 2026-02-27 00:11:04.841220 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-27 00:11:05.434097 | orchestrator | changed: [testbed-manager] 2026-02-27 00:11:05.434144 | orchestrator | 2026-02-27 00:11:05.434291 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-27 00:13:27.351305 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-27 00:13:27.351388 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-27 00:13:27.351431 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-27 00:13:27.351441 | orchestrator | 2026-02-27 00:13:27.351452 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-27 00:13:29.924278 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-27 00:13:29.924354 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-27 00:13:29.924368 | orchestrator | 2026-02-27 00:13:29.924380 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-27 00:13:29.924429 | orchestrator | 2026-02-27 00:13:29.924441 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-27 00:13:31.386563 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:31.386662 | orchestrator | 2026-02-27 00:13:31.386682 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-27 00:13:31.434929 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:31.435027 | 
orchestrator | 2026-02-27 00:13:31.435046 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-27 00:13:31.507127 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:31.507168 | orchestrator | 2026-02-27 00:13:31.507177 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-27 00:13:32.331251 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:32.331290 | orchestrator | 2026-02-27 00:13:32.331297 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-27 00:13:34.116594 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:34.116636 | orchestrator | 2026-02-27 00:13:34.116644 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-27 00:13:35.549024 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-27 00:13:35.549062 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-27 00:13:35.549069 | orchestrator | 2026-02-27 00:13:35.549084 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-27 00:13:36.955203 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:36.955372 | orchestrator | 2026-02-27 00:13:36.955414 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-27 00:13:38.652270 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-27 00:13:38.652333 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-27 00:13:38.652346 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-27 00:13:38.652358 | orchestrator | 2026-02-27 00:13:38.652371 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-27 00:13:38.699962 | orchestrator | skipping: 
[testbed-manager] 2026-02-27 00:13:38.700045 | orchestrator | 2026-02-27 00:13:38.700059 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-27 00:13:38.775924 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:38.775981 | orchestrator | 2026-02-27 00:13:38.775988 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-27 00:13:39.364633 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:39.364673 | orchestrator | 2026-02-27 00:13:39.364680 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-27 00:13:39.431309 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:39.431355 | orchestrator | 2026-02-27 00:13:39.431364 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-27 00:13:40.291244 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-27 00:13:40.292000 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:40.292014 | orchestrator | 2026-02-27 00:13:40.292023 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-27 00:13:40.320952 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:40.320986 | orchestrator | 2026-02-27 00:13:40.320992 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-27 00:13:40.354989 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:40.355022 | orchestrator | 2026-02-27 00:13:40.355029 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-27 00:13:40.384007 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:40.384064 | orchestrator | 2026-02-27 00:13:40.384080 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-27 00:13:40.457843 | 
orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:40.457878 | orchestrator | 2026-02-27 00:13:40.457884 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-27 00:13:41.209734 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:41.209772 | orchestrator | 2026-02-27 00:13:41.209779 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-27 00:13:41.209785 | orchestrator | 2026-02-27 00:13:41.209789 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-27 00:13:42.649234 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:42.649282 | orchestrator | 2026-02-27 00:13:42.649289 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-27 00:13:43.627276 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:43.627313 | orchestrator | 2026-02-27 00:13:43.627319 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:13:43.627324 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-27 00:13:43.627338 | orchestrator | 2026-02-27 00:13:43.817305 | orchestrator | ok: Runtime: 0:08:25.915218 2026-02-27 00:13:43.836356 | 2026-02-27 00:13:43.836499 | TASK [Point out that logging in on the manager is now possible] 2026-02-27 00:13:43.887817 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-27 00:13:43.897659 | 2026-02-27 00:13:43.897958 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-27 00:13:43.944900 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
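The PLAY RECAP line above (`ok=33 changed=23 unreachable=0 failed=0 ...`) is the summary a wrapper can key off when scraping the log. A hedged sketch of such a check (the grep pattern is an assumption about the recap line format, which Ansible keeps stable; the sample line is copied from the log):

```shell
#!/usr/bin/env bash
# Succeed only when a PLAY RECAP host line reports no failed and no
# unreachable hosts; any count >= 1 for either field fails the check.
recap_ok() {
  ! printf '%s\n' "$1" | grep -Eq '(unreachable|failed)=[1-9][0-9]*'
}
```

Matching `=[1-9][0-9]*` rather than simply `failed=` keeps the healthy `failed=0` case from tripping the check.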
2026-02-27 00:13:43.954188 | 2026-02-27 00:13:43.954313 | TASK [Run manager part 1 + 2] 2026-02-27 00:13:44.809806 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-27 00:13:44.865453 | orchestrator | 2026-02-27 00:13:44.865500 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-27 00:13:44.865506 | orchestrator | 2026-02-27 00:13:44.865519 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-27 00:13:47.982186 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:47.982333 | orchestrator | 2026-02-27 00:13:47.982430 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-27 00:13:48.019179 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:48.019250 | orchestrator | 2026-02-27 00:13:48.019268 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-27 00:13:48.064787 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:48.064872 | orchestrator | 2026-02-27 00:13:48.064890 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-27 00:13:48.117882 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:48.117966 | orchestrator | 2026-02-27 00:13:48.117986 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-27 00:13:48.187487 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:48.187578 | orchestrator | 2026-02-27 00:13:48.187596 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-27 00:13:48.250760 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:48.250834 | orchestrator | 2026-02-27 00:13:48.250849 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-27 00:13:48.300743 | 
orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-27 00:13:48.300832 | orchestrator | 2026-02-27 00:13:48.300854 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-27 00:13:49.057030 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:49.057119 | orchestrator | 2026-02-27 00:13:49.057138 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-27 00:13:49.100996 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:13:49.101083 | orchestrator | 2026-02-27 00:13:49.101100 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-27 00:13:50.520496 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:50.520712 | orchestrator | 2026-02-27 00:13:50.520726 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-27 00:13:51.135077 | orchestrator | ok: [testbed-manager] 2026-02-27 00:13:51.135136 | orchestrator | 2026-02-27 00:13:51.135146 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-27 00:13:52.296641 | orchestrator | changed: [testbed-manager] 2026-02-27 00:13:52.296702 | orchestrator | 2026-02-27 00:13:52.296712 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-27 00:14:08.570294 | orchestrator | changed: [testbed-manager] 2026-02-27 00:14:08.570412 | orchestrator | 2026-02-27 00:14:08.570431 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-27 00:14:09.238052 | orchestrator | ok: [testbed-manager] 2026-02-27 00:14:09.238182 | orchestrator | 2026-02-27 00:14:09.238202 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-27 00:14:09.295192 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:14:09.295281 | orchestrator | 2026-02-27 00:14:09.295297 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-27 00:14:10.317052 | orchestrator | changed: [testbed-manager] 2026-02-27 00:14:10.317146 | orchestrator | 2026-02-27 00:14:10.317162 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-27 00:14:11.329265 | orchestrator | changed: [testbed-manager] 2026-02-27 00:14:11.329392 | orchestrator | 2026-02-27 00:14:11.329412 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-27 00:14:11.929698 | orchestrator | changed: [testbed-manager] 2026-02-27 00:14:11.929780 | orchestrator | 2026-02-27 00:14:11.929795 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-27 00:14:11.971714 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-27 00:14:11.971780 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-27 00:14:11.971787 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-27 00:14:11.971792 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-27 00:14:14.315167 | orchestrator | changed: [testbed-manager] 2026-02-27 00:14:14.315274 | orchestrator | 2026-02-27 00:14:14.315292 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-27 00:14:23.444697 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-27 00:14:23.444793 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-27 00:14:23.444817 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-27 00:14:23.444833 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-27 00:14:23.444859 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-27 00:14:23.444869 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-27 00:14:23.444879 | orchestrator | 2026-02-27 00:14:23.444888 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-27 00:14:24.517235 | orchestrator | changed: [testbed-manager] 2026-02-27 00:14:24.517304 | orchestrator | 2026-02-27 00:14:24.517314 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-27 00:14:24.561269 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:14:24.561334 | orchestrator | 2026-02-27 00:14:24.561365 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-27 00:14:27.878764 | orchestrator | changed: [testbed-manager] 2026-02-27 00:14:27.878848 | orchestrator | 2026-02-27 00:14:27.878864 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-27 00:14:27.926803 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:14:27.926902 | orchestrator | 2026-02-27 00:14:27.926918 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-27 00:16:16.579138 | orchestrator | changed: [testbed-manager] 2026-02-27 
00:16:16.579176 | orchestrator | 2026-02-27 00:16:16.579184 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-27 00:16:17.805995 | orchestrator | ok: [testbed-manager] 2026-02-27 00:16:17.806121 | orchestrator | 2026-02-27 00:16:17.806149 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:16:17.806172 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-27 00:16:17.806192 | orchestrator | 2026-02-27 00:16:18.096342 | orchestrator | ok: Runtime: 0:02:33.637363 2026-02-27 00:16:18.116101 | 2026-02-27 00:16:18.116279 | TASK [Reboot manager] 2026-02-27 00:16:19.655370 | orchestrator | ok: Runtime: 0:00:00.995303 2026-02-27 00:16:19.677016 | 2026-02-27 00:16:19.677273 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-27 00:16:36.094164 | orchestrator | ok 2026-02-27 00:16:36.103866 | 2026-02-27 00:16:36.103986 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-27 00:17:36.148255 | orchestrator | ok 2026-02-27 00:17:36.157743 | 2026-02-27 00:17:36.157862 | TASK [Deploy manager + bootstrap nodes] 2026-02-27 00:17:38.677121 | orchestrator | 2026-02-27 00:17:38.677394 | orchestrator | # DEPLOY MANAGER 2026-02-27 00:17:38.677437 | orchestrator | 2026-02-27 00:17:38.677463 | orchestrator | + set -e 2026-02-27 00:17:38.677487 | orchestrator | + echo 2026-02-27 00:17:38.677511 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-27 00:17:38.677541 | orchestrator | + echo 2026-02-27 00:17:38.677608 | orchestrator | + cat /opt/manager-vars.sh 2026-02-27 00:17:38.679725 | orchestrator | export NUMBER_OF_NODES=6 2026-02-27 00:17:38.679775 | orchestrator | 2026-02-27 00:17:38.679788 | orchestrator | export CEPH_VERSION=reef 2026-02-27 00:17:38.679801 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-27 00:17:38.679814 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-27 00:17:38.679839 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-27 00:17:38.679850 | orchestrator | 2026-02-27 00:17:38.679868 | orchestrator | export ARA=false 2026-02-27 00:17:38.679880 | orchestrator | export DEPLOY_MODE=manager 2026-02-27 00:17:38.679898 | orchestrator | export TEMPEST=true 2026-02-27 00:17:38.679910 | orchestrator | export IS_ZUUL=true 2026-02-27 00:17:38.679921 | orchestrator | 2026-02-27 00:17:38.679938 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.197 2026-02-27 00:17:38.679950 | orchestrator | export EXTERNAL_API=false 2026-02-27 00:17:38.679961 | orchestrator | 2026-02-27 00:17:38.679972 | orchestrator | export IMAGE_USER=ubuntu 2026-02-27 00:17:38.679985 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-27 00:17:38.679996 | orchestrator | 2026-02-27 00:17:38.680007 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-27 00:17:38.680026 | orchestrator | 2026-02-27 00:17:38.680037 | orchestrator | + echo 2026-02-27 00:17:38.680050 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-27 00:17:38.681034 | orchestrator | ++ export INTERACTIVE=false 2026-02-27 00:17:38.681058 | orchestrator | ++ INTERACTIVE=false 2026-02-27 00:17:38.681072 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-27 00:17:38.681084 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-27 00:17:38.681221 | orchestrator | + source /opt/manager-vars.sh 2026-02-27 00:17:38.681238 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-27 00:17:38.681249 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-27 00:17:38.681260 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-27 00:17:38.681271 | orchestrator | ++ CEPH_VERSION=reef 2026-02-27 00:17:38.681282 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-27 00:17:38.681293 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-27 00:17:38.681304 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-27 00:17:38.681315 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-27 00:17:38.681326 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-27 00:17:38.681347 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-27 00:17:38.681359 | orchestrator | ++ export ARA=false 2026-02-27 00:17:38.681370 | orchestrator | ++ ARA=false 2026-02-27 00:17:38.681381 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-27 00:17:38.681392 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-27 00:17:38.681402 | orchestrator | ++ export TEMPEST=true 2026-02-27 00:17:38.681413 | orchestrator | ++ TEMPEST=true 2026-02-27 00:17:38.681424 | orchestrator | ++ export IS_ZUUL=true 2026-02-27 00:17:38.681435 | orchestrator | ++ IS_ZUUL=true 2026-02-27 00:17:38.681446 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.197 2026-02-27 00:17:38.681457 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.197 2026-02-27 00:17:38.681468 | orchestrator | ++ export EXTERNAL_API=false 2026-02-27 00:17:38.681479 | orchestrator | ++ EXTERNAL_API=false 2026-02-27 00:17:38.681490 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-27 00:17:38.681501 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-27 00:17:38.681516 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-27 00:17:38.681528 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-27 00:17:38.681539 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-27 00:17:38.681549 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-27 00:17:38.681560 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-27 00:17:38.734224 | orchestrator | + docker version 2026-02-27 00:17:38.838846 | orchestrator | Client: Docker Engine - Community 2026-02-27 00:17:38.838938 | orchestrator | Version: 27.5.1 2026-02-27 00:17:38.838952 | orchestrator | API version: 1.47 2026-02-27 00:17:38.838964 | orchestrator | Go version: go1.22.11 2026-02-27 00:17:38.838974 | orchestrator | Git commit: 9f9e405 2026-02-27 00:17:38.838984 | 
orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-27 00:17:38.838995 | orchestrator | OS/Arch: linux/amd64 2026-02-27 00:17:38.839005 | orchestrator | Context: default 2026-02-27 00:17:38.839015 | orchestrator | 2026-02-27 00:17:38.839025 | orchestrator | Server: Docker Engine - Community 2026-02-27 00:17:38.839035 | orchestrator | Engine: 2026-02-27 00:17:38.839045 | orchestrator | Version: 27.5.1 2026-02-27 00:17:38.839056 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-27 00:17:38.839092 | orchestrator | Go version: go1.22.11 2026-02-27 00:17:38.839103 | orchestrator | Git commit: 4c9b3b0 2026-02-27 00:17:38.839112 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-27 00:17:38.839122 | orchestrator | OS/Arch: linux/amd64 2026-02-27 00:17:38.839132 | orchestrator | Experimental: false 2026-02-27 00:17:38.839141 | orchestrator | containerd: 2026-02-27 00:17:38.839151 | orchestrator | Version: v2.2.1 2026-02-27 00:17:38.839205 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-27 00:17:38.839216 | orchestrator | runc: 2026-02-27 00:17:38.839226 | orchestrator | Version: 1.3.4 2026-02-27 00:17:38.839236 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-27 00:17:38.839245 | orchestrator | docker-init: 2026-02-27 00:17:38.839255 | orchestrator | Version: 0.19.0 2026-02-27 00:17:38.839266 | orchestrator | GitCommit: de40ad0 2026-02-27 00:17:38.842659 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-27 00:17:38.853045 | orchestrator | + set -e 2026-02-27 00:17:38.853988 | orchestrator | + source /opt/manager-vars.sh 2026-02-27 00:17:38.854070 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-27 00:17:38.854087 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-27 00:17:38.854101 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-27 00:17:38.854113 | orchestrator | ++ CEPH_VERSION=reef 2026-02-27 00:17:38.854127 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-27 
00:17:38.854141 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-27 00:17:38.854154 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-27 00:17:38.854183 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-27 00:17:38.854195 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-27 00:17:38.854206 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-27 00:17:38.854217 | orchestrator | ++ export ARA=false 2026-02-27 00:17:38.854228 | orchestrator | ++ ARA=false 2026-02-27 00:17:38.854239 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-27 00:17:38.854250 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-27 00:17:38.854261 | orchestrator | ++ export TEMPEST=true 2026-02-27 00:17:38.854272 | orchestrator | ++ TEMPEST=true 2026-02-27 00:17:38.854283 | orchestrator | ++ export IS_ZUUL=true 2026-02-27 00:17:38.854294 | orchestrator | ++ IS_ZUUL=true 2026-02-27 00:17:38.854305 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.197 2026-02-27 00:17:38.854316 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.197 2026-02-27 00:17:38.854327 | orchestrator | ++ export EXTERNAL_API=false 2026-02-27 00:17:38.854338 | orchestrator | ++ EXTERNAL_API=false 2026-02-27 00:17:38.854349 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-27 00:17:38.854360 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-27 00:17:38.854371 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-27 00:17:38.854381 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-27 00:17:38.854393 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-27 00:17:38.854404 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-27 00:17:38.854415 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-27 00:17:38.854426 | orchestrator | ++ export INTERACTIVE=false 2026-02-27 00:17:38.854436 | orchestrator | ++ INTERACTIVE=false 2026-02-27 00:17:38.854447 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-27 00:17:38.854463 | orchestrator | ++ OSISM_APPLY_RETRY=1 
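As the `++ export ...` trace shows, `/opt/manager-vars.sh` is a plain file of `export KEY=value` lines that both the wrapper and `000-manager.sh` source. A self-contained reproduction of the pattern (a temp file stands in for `/opt/manager-vars.sh`; the values are a subset copied from the log):

```shell
#!/usr/bin/env bash
set -e
# Stand-in for /opt/manager-vars.sh with values taken from the log above.
vars=$(mktemp)
cat > "$vars" <<'EOF'
export NUMBER_OF_NODES=6
export MANAGER_VERSION=9.5.0
export CEPH_STACK=ceph-ansible
EOF
# Sourcing the file exports the variables into the current shell, which is
# why each deploy script can simply `source /opt/manager-vars.sh`.
. "$vars"
echo "deploying manager $MANAGER_VERSION with $NUMBER_OF_NODES nodes"
# prints: deploying manager 9.5.0 with 6 nodes
rm -f "$vars"
```

Sourcing (rather than executing) the file is what makes the variables visible to later commands in the same script, and it is why the trace shows each assignment twice: once for `export`, once for the plain assignment.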
2026-02-27 00:17:38.854474 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-27 00:17:38.854485 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-27 00:17:38.858477 | orchestrator | + set -e 2026-02-27 00:17:38.858553 | orchestrator | + VERSION=9.5.0 2026-02-27 00:17:38.858568 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-27 00:17:38.865871 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-27 00:17:38.865917 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-27 00:17:38.869556 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-27 00:17:38.873391 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-27 00:17:38.881627 | orchestrator | /opt/configuration ~ 2026-02-27 00:17:38.881695 | orchestrator | + set -e 2026-02-27 00:17:38.881710 | orchestrator | + pushd /opt/configuration 2026-02-27 00:17:38.881732 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-27 00:17:38.883277 | orchestrator | + source /opt/venv/bin/activate 2026-02-27 00:17:38.884446 | orchestrator | ++ deactivate nondestructive 2026-02-27 00:17:38.884475 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:38.884493 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:38.884536 | orchestrator | ++ hash -r 2026-02-27 00:17:38.884552 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:38.884567 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-27 00:17:38.884582 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-27 00:17:38.884597 | orchestrator | ++ '[' '!' 
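`set-manager-version.sh` pins `manager_version` with `sed` and, for a non-`latest` version, deletes the `ceph_version:` and `openstack_version:` lines so the pinned release controls those versions. The same three edits against a throwaway copy of the file (the sample file content is an assumption about the configuration layout, and `sed -i` without a suffix is the GNU form used on Linux):

```shell
#!/usr/bin/env bash
set -e
VERSION=9.5.0
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
manager_version: latest
ceph_version: reef
openstack_version: 2024.2
EOF
# Pin the manager version in place, as set-manager-version.sh does.
sed -i "s/manager_version: .*/manager_version: $VERSION/g" "$cfg"
# With an explicit manager version, drop the independent version pins.
sed -i '/ceph_version:/d' "$cfg"
sed -i '/openstack_version:/d' "$cfg"
cat "$cfg"   # prints: manager_version: 9.5.0
```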
nondestructive = nondestructive ']' 2026-02-27 00:17:38.884619 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-27 00:17:38.884635 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-27 00:17:38.884650 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-27 00:17:38.884665 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-27 00:17:38.884680 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-27 00:17:38.884696 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-27 00:17:38.884711 | orchestrator | ++ export PATH 2026-02-27 00:17:38.884731 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:38.884745 | orchestrator | ++ '[' -z '' ']' 2026-02-27 00:17:38.884764 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-27 00:17:38.884780 | orchestrator | ++ PS1='(venv) ' 2026-02-27 00:17:38.884794 | orchestrator | ++ export PS1 2026-02-27 00:17:38.884809 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-27 00:17:38.884823 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-27 00:17:38.884842 | orchestrator | ++ hash -r 2026-02-27 00:17:38.884858 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-27 00:17:40.152707 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-27 00:17:40.153042 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-27 00:17:40.154422 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-27 00:17:40.155797 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-27 00:17:40.156987 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-27 00:17:40.167046 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-27 00:17:40.168515 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-27 00:17:40.169393 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-27 00:17:40.170525 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-27 00:17:40.205230 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-27 00:17:40.206376 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-27 00:17:40.208242 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-27 00:17:40.209617 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.2.25) 2026-02-27 00:17:40.213793 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-27 00:17:40.438119 | orchestrator | ++ which gilt 2026-02-27 00:17:40.441868 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-27 00:17:40.441924 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-27 00:17:40.697476 | orchestrator | osism.cfg-generics: 2026-02-27 00:17:40.854212 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-27 00:17:40.854318 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-27 00:17:40.854815 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-27 00:17:40.854880 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-27 00:17:41.786626 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-27 00:17:41.795826 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-27 00:17:42.250511 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-27 00:17:42.317055 | orchestrator | ~ 2026-02-27 00:17:42.317153 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-27 00:17:42.317186 | orchestrator | + deactivate 2026-02-27 00:17:42.317198 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-27 00:17:42.317208 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-27 00:17:42.317217 | orchestrator | + export PATH 2026-02-27 00:17:42.317226 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-27 00:17:42.317235 | orchestrator | + '[' -n '' ']' 2026-02-27 00:17:42.317244 | orchestrator | + hash -r 2026-02-27 00:17:42.317252 | orchestrator | + '[' -n '' ']' 2026-02-27 00:17:42.317259 | orchestrator | + unset VIRTUAL_ENV 2026-02-27 00:17:42.317267 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-27 00:17:42.317275 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-27 00:17:42.317282 | orchestrator | + unset -f deactivate 2026-02-27 00:17:42.317289 | orchestrator | + popd 2026-02-27 00:17:42.318636 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-27 00:17:42.318655 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-27 00:17:42.319823 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-27 00:17:42.395910 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-27 00:17:42.396016 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-27 00:17:42.397284 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-27 00:17:42.462313 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-27 00:17:42.463138 | orchestrator | ++ semver 2024.2 2025.1 2026-02-27 00:17:42.524912 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-27 00:17:42.525010 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-27 00:17:42.629257 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-27 00:17:42.629389 | orchestrator | + source /opt/venv/bin/activate 2026-02-27 00:17:42.629408 | orchestrator | ++ deactivate nondestructive 2026-02-27 00:17:42.629422 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:42.629434 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:42.629462 | orchestrator | ++ hash -r 2026-02-27 00:17:42.629484 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:42.629496 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-27 00:17:42.629534 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-27 00:17:42.629547 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-27 00:17:42.629749 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-27 00:17:42.629768 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-27 00:17:42.629780 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-27 00:17:42.629791 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-27 00:17:42.629905 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-27 00:17:42.629962 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-27 00:17:42.629991 | orchestrator | ++ export PATH 2026-02-27 00:17:42.630172 | orchestrator | ++ '[' -n '' ']' 2026-02-27 00:17:42.630262 | orchestrator | ++ '[' -z '' ']' 2026-02-27 00:17:42.630283 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-27 00:17:42.630582 | orchestrator | ++ PS1='(venv) ' 2026-02-27 00:17:42.630696 | orchestrator | ++ export PS1 2026-02-27 00:17:42.630714 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-27 00:17:42.630729 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-27 00:17:42.630968 | orchestrator | ++ hash -r 2026-02-27 00:17:42.630988 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-27 00:17:43.954569 | orchestrator | 2026-02-27 00:17:43.954693 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-27 00:17:43.954709 | orchestrator | 2026-02-27 00:17:43.954718 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-27 00:17:44.568390 | orchestrator | ok: [testbed-manager] 2026-02-27 00:17:44.568495 | orchestrator | 2026-02-27 00:17:44.568514 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-02-27 00:17:45.659529 | orchestrator | changed: [testbed-manager]
2026-02-27 00:17:45.659659 | orchestrator |
2026-02-27 00:17:45.659689 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-27 00:17:45.659739 | orchestrator |
2026-02-27 00:17:45.659752 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-27 00:17:48.041460 | orchestrator | ok: [testbed-manager]
2026-02-27 00:17:48.041545 | orchestrator |
2026-02-27 00:17:48.041559 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-27 00:17:48.093142 | orchestrator | ok: [testbed-manager]
2026-02-27 00:17:48.093257 | orchestrator |
2026-02-27 00:17:48.093272 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-27 00:17:48.558085 | orchestrator | changed: [testbed-manager]
2026-02-27 00:17:48.558216 | orchestrator |
2026-02-27 00:17:48.558235 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-27 00:17:48.598457 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:17:48.598544 | orchestrator |
2026-02-27 00:17:48.598567 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-27 00:17:48.910550 | orchestrator | changed: [testbed-manager]
2026-02-27 00:17:48.910633 | orchestrator |
2026-02-27 00:17:48.910649 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-27 00:17:49.229537 | orchestrator | ok: [testbed-manager]
2026-02-27 00:17:49.229626 | orchestrator |
2026-02-27 00:17:49.229641 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-27 00:17:49.355641 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:17:49.355741 | orchestrator |
2026-02-27 00:17:49.355766 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-27 00:17:49.355787 | orchestrator |
2026-02-27 00:17:49.355807 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-27 00:17:51.052614 | orchestrator | ok: [testbed-manager]
2026-02-27 00:17:51.052701 | orchestrator |
2026-02-27 00:17:51.052718 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-27 00:17:51.140909 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-27 00:17:51.140986 | orchestrator |
2026-02-27 00:17:51.141001 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-27 00:17:51.194540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-27 00:17:51.194632 | orchestrator |
2026-02-27 00:17:51.194653 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-27 00:17:52.268430 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-27 00:17:52.268547 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-27 00:17:52.268564 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-27 00:17:52.268576 | orchestrator |
2026-02-27 00:17:52.268591 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-27 00:17:54.013768 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-27 00:17:54.013859 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-27 00:17:54.013874 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-27 00:17:54.013887 | orchestrator |
2026-02-27 00:17:54.013899 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-02-27 00:17:54.617227 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-27 00:17:54.617369 | orchestrator | changed: [testbed-manager]
2026-02-27 00:17:54.617406 | orchestrator |
2026-02-27 00:17:54.618221 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-27 00:17:55.247046 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-27 00:17:55.247167 | orchestrator | changed: [testbed-manager]
2026-02-27 00:17:55.247181 | orchestrator |
2026-02-27 00:17:55.247190 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-27 00:17:55.305483 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:17:55.305587 | orchestrator |
2026-02-27 00:17:55.305606 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-27 00:17:55.682874 | orchestrator | ok: [testbed-manager]
2026-02-27 00:17:55.682973 | orchestrator |
2026-02-27 00:17:55.682989 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-27 00:17:55.773861 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-27 00:17:55.773989 | orchestrator |
2026-02-27 00:17:55.774008 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-27 00:17:56.958349 | orchestrator | changed: [testbed-manager]
2026-02-27 00:17:56.958464 | orchestrator |
2026-02-27 00:17:56.958490 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-27 00:17:57.866247 | orchestrator | changed: [testbed-manager]
2026-02-27 00:17:57.866343 | orchestrator |
2026-02-27 00:17:57.866357 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-27 00:18:08.592221 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:08.592315 | orchestrator |
2026-02-27 00:18:08.592328 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-27 00:18:08.661369 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:18:08.661460 | orchestrator |
2026-02-27 00:18:08.661498 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-27 00:18:08.661511 | orchestrator |
2026-02-27 00:18:08.661523 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-27 00:18:10.561093 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:10.561188 | orchestrator |
2026-02-27 00:18:10.561195 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-27 00:18:10.708031 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-27 00:18:10.708116 | orchestrator |
2026-02-27 00:18:10.708128 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-27 00:18:10.770334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-27 00:18:10.770416 | orchestrator |
2026-02-27 00:18:10.770427 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-27 00:18:13.472263 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:13.472389 | orchestrator |
2026-02-27 00:18:13.472418 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-27 00:18:13.530943 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:13.531029 | orchestrator |
2026-02-27 00:18:13.531042 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-27 00:18:13.675893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-27 00:18:13.676025 | orchestrator |
2026-02-27 00:18:13.676053 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-27 00:18:16.614290 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-27 00:18:16.614394 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-27 00:18:16.614410 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-27 00:18:16.614423 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-27 00:18:16.614434 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-27 00:18:16.614448 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-27 00:18:16.614459 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-27 00:18:16.614470 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-27 00:18:16.614481 | orchestrator |
2026-02-27 00:18:16.614493 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-27 00:18:17.271862 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:17.271981 | orchestrator |
2026-02-27 00:18:17.272001 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-27 00:18:17.947364 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:17.947463 | orchestrator |
2026-02-27 00:18:17.947478 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-27 00:18:18.022539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-27 00:18:18.022638 | orchestrator |
2026-02-27 00:18:18.022655 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-27 00:18:19.276271 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-27 00:18:19.276381 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-27 00:18:19.276397 | orchestrator |
2026-02-27 00:18:19.276410 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-27 00:18:19.921157 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:19.921262 | orchestrator |
2026-02-27 00:18:19.921279 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-27 00:18:19.984626 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:18:19.984725 | orchestrator |
2026-02-27 00:18:19.984741 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-27 00:18:20.070551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-27 00:18:20.070644 | orchestrator |
2026-02-27 00:18:20.070660 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-27 00:18:20.727525 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:20.727627 | orchestrator |
2026-02-27 00:18:20.727644 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-27 00:18:20.800457 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-27 00:18:20.800552 | orchestrator |
2026-02-27 00:18:20.800568 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-27 00:18:22.233318 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-27 00:18:22.233415 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-27 00:18:22.233431 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:22.233444 | orchestrator |
2026-02-27 00:18:22.233456 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-27 00:18:22.872035 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:22.872116 | orchestrator |
2026-02-27 00:18:22.872164 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-27 00:18:22.932680 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:18:22.932745 | orchestrator |
2026-02-27 00:18:22.932751 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-27 00:18:23.028247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-27 00:18:23.028316 | orchestrator |
2026-02-27 00:18:23.028324 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-27 00:18:23.575189 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:23.575306 | orchestrator |
2026-02-27 00:18:23.575322 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-27 00:18:23.981670 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:23.981792 | orchestrator |
2026-02-27 00:18:23.981822 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-27 00:18:25.285649 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-27 00:18:25.285786 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-27 00:18:25.285815 | orchestrator |
2026-02-27 00:18:25.285836 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-27 00:18:25.938728 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:25.938825 | orchestrator |
2026-02-27 00:18:25.938842 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-27 00:18:26.335570 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:26.335688 | orchestrator |
2026-02-27 00:18:26.335713 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-27 00:18:26.736268 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:26.736361 | orchestrator |
2026-02-27 00:18:26.736376 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-27 00:18:26.792696 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:18:26.792792 | orchestrator |
2026-02-27 00:18:26.792807 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-27 00:18:26.879186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-27 00:18:26.879316 | orchestrator |
2026-02-27 00:18:26.879334 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-27 00:18:26.934585 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:26.934687 | orchestrator |
2026-02-27 00:18:26.934703 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-27 00:18:29.034315 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-27 00:18:29.034435 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-27 00:18:29.034462 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-27 00:18:29.034481 | orchestrator |
2026-02-27 00:18:29.034502 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-27 00:18:29.794221 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:29.794300 | orchestrator |
2026-02-27 00:18:29.794309 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-27 00:18:30.565987 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:30.566211 | orchestrator |
2026-02-27 00:18:30.566242 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-27 00:18:31.232937 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:31.233023 | orchestrator |
2026-02-27 00:18:31.233039 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-27 00:18:31.308382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-27 00:18:31.308466 | orchestrator |
2026-02-27 00:18:31.308484 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-27 00:18:31.355914 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:31.355958 | orchestrator |
2026-02-27 00:18:31.355971 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-27 00:18:32.035600 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-27 00:18:32.035684 | orchestrator |
2026-02-27 00:18:32.035701 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-27 00:18:32.127298 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-27 00:18:32.127372 | orchestrator |
2026-02-27 00:18:32.127386 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-27 00:18:32.800490 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:32.800572 | orchestrator |
2026-02-27 00:18:32.800587 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-27 00:18:33.356611 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:33.356714 | orchestrator |
2026-02-27 00:18:33.356740 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-27 00:18:33.409279 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:18:33.409355 | orchestrator |
2026-02-27 00:18:33.409370 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-27 00:18:33.479829 | orchestrator | ok: [testbed-manager]
2026-02-27 00:18:33.479912 | orchestrator |
2026-02-27 00:18:33.479928 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-27 00:18:34.259269 | orchestrator | changed: [testbed-manager]
2026-02-27 00:18:34.259322 | orchestrator |
2026-02-27 00:18:34.259332 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-27 00:19:47.551216 | orchestrator | changed: [testbed-manager]
2026-02-27 00:19:47.551329 | orchestrator |
2026-02-27 00:19:47.551346 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-27 00:19:48.569359 | orchestrator | ok: [testbed-manager]
2026-02-27 00:19:48.569484 | orchestrator |
2026-02-27 00:19:48.569509 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-27 00:19:48.623328 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:19:48.623435 | orchestrator |
2026-02-27 00:19:48.623452 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-27 00:19:51.270747 | orchestrator | changed: [testbed-manager]
2026-02-27 00:19:51.270839 | orchestrator |
2026-02-27 00:19:51.270850 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-27 00:19:51.337837 | orchestrator | ok: [testbed-manager]
2026-02-27 00:19:51.337933 | orchestrator |
2026-02-27 00:19:51.337948 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-27 00:19:51.337959 | orchestrator |
2026-02-27 00:19:51.337976 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-27 00:19:51.509820 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:19:51.509890 | orchestrator |
2026-02-27 00:19:51.509895 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-27 00:20:51.568767 | orchestrator | Pausing for 60 seconds
2026-02-27 00:20:51.568846 | orchestrator | changed: [testbed-manager]
2026-02-27 00:20:51.568853 | orchestrator |
2026-02-27 00:20:51.568858 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-27 00:20:54.733516 | orchestrator | changed: [testbed-manager]
2026-02-27 00:20:54.733608 | orchestrator |
2026-02-27 00:20:54.733622 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-27 00:21:56.874694 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-27 00:21:56.874842 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-27 00:21:56.874895 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-27 00:21:56.874918 | orchestrator | changed: [testbed-manager]
2026-02-27 00:21:56.874940 | orchestrator |
2026-02-27 00:21:56.874958 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-27 00:22:07.525805 | orchestrator | changed: [testbed-manager]
2026-02-27 00:22:07.525903 | orchestrator |
2026-02-27 00:22:07.525917 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-27 00:22:07.616751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-27 00:22:07.616851 | orchestrator |
2026-02-27 00:22:07.616865 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-27 00:22:07.616877 | orchestrator |
2026-02-27 00:22:07.616888 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-27 00:22:07.673984 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:22:07.674192 | orchestrator |
2026-02-27 00:22:07.674216 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-27 00:22:07.758453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-27 00:22:07.758526 | orchestrator |
2026-02-27 00:22:07.758534 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-27 00:22:08.562350 | orchestrator | changed: [testbed-manager]
2026-02-27 00:22:08.562452 | orchestrator |
2026-02-27 00:22:08.562469 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-27 00:22:12.102000 | orchestrator | ok: [testbed-manager]
2026-02-27 00:22:12.102253 | orchestrator |
2026-02-27 00:22:12.102284 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-27 00:22:12.178981 | orchestrator | ok: [testbed-manager] => {
2026-02-27 00:22:12.179072 | orchestrator | "version_check_result.stdout_lines": [
2026-02-27 00:22:12.179088 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-27 00:22:12.179099 | orchestrator | "Checking running containers against expected versions...",
2026-02-27 00:22:12.179134 | orchestrator | "",
2026-02-27 00:22:12.179146 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-27 00:22:12.179156 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-27 00:22:12.179167 | orchestrator | " Enabled: true",
2026-02-27 00:22:12.179177 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-27 00:22:12.179187 | orchestrator | " Status: ✅ MATCH",
2026-02-27 00:22:12.179197 | orchestrator | "",
2026-02-27 00:22:12.179207 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-27 00:22:12.179241 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-27 00:22:12.179251 | orchestrator | " Enabled: true",
2026-02-27 00:22:12.179261 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-27 00:22:12.179271 | orchestrator | " Status: ✅ MATCH",
2026-02-27 00:22:12.179281 | orchestrator | "",
2026-02-27 00:22:12.179291 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-27 00:22:12.179301 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-27 00:22:12.179310 | orchestrator | " Enabled: true",
2026-02-27 00:22:12.179320 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-27 00:22:12.179330 | orchestrator | " Status: ✅ MATCH",
2026-02-27 00:22:12.179339 | orchestrator | "",
2026-02-27 00:22:12.179349 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-27 00:22:12.179359 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-27 00:22:12.179368 | orchestrator | " Enabled: true",
2026-02-27 00:22:12.179378 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-27 00:22:12.179387 | orchestrator | " Status: ✅ MATCH",
2026-02-27 00:22:12.179397 | orchestrator | "",
2026-02-27 00:22:12.179409 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-27 00:22:12.179419 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-27 00:22:12.179434 | orchestrator | " Enabled: true",
2026-02-27 00:22:12.179451 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-27 00:22:12.179474 | orchestrator | " Status: ✅ MATCH",
2026-02-27 00:22:12.179492 | orchestrator | "",
2026-02-27 00:22:12.179508 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-27 00:22:12.179524 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-27 00:22:12.179542 | orchestrator | " Enabled: true",
2026-02-27 00:22:12.179557 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-27 00:22:12.179575 | orchestrator | " Status: ✅ MATCH",
2026-02-27 00:22:12.179591 | orchestrator | "",
2026-02-27 00:22:12.179608 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-27 00:22:12.179625 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-27 00:22:12.179642 | orchestrator | " Enabled: true",
2026-02-27 00:22:12.179660 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-27 00:22:12.179679 | orchestrator | " Status: ✅ MATCH",
2026-02-27 00:22:12.179697 | orchestrator | "",
2026-02-27 00:22:12.179709 | orchestrator | "Checking service:
mariadb (MariaDB for ARA)", 2026-02-27 00:22:12.179721 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-27 00:22:12.179731 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.179742 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-27 00:22:12.179753 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.179765 | orchestrator | "", 2026-02-27 00:22:12.179776 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-27 00:22:12.179788 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-27 00:22:12.179799 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.179810 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-27 00:22:12.179821 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.179831 | orchestrator | "", 2026-02-27 00:22:12.179843 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-27 00:22:12.179854 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-27 00:22:12.179864 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.179875 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-27 00:22:12.179886 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.179898 | orchestrator | "", 2026-02-27 00:22:12.179907 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-27 00:22:12.179928 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.179938 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.179948 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.179957 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.179967 | orchestrator | "", 2026-02-27 00:22:12.179976 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-27 00:22:12.179986 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.179995 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.180005 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.180015 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.180024 | orchestrator | "", 2026-02-27 00:22:12.180034 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-27 00:22:12.180044 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.180053 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.180064 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.180073 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.180083 | orchestrator | "", 2026-02-27 00:22:12.180093 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-27 00:22:12.180105 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.180146 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.180163 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.180201 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.180217 | orchestrator | "", 2026-02-27 00:22:12.180232 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-27 00:22:12.180248 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.180277 | orchestrator | " Enabled: true", 2026-02-27 00:22:12.180293 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-27 00:22:12.180308 | orchestrator | " Status: ✅ MATCH", 2026-02-27 00:22:12.180324 | orchestrator | "", 2026-02-27 00:22:12.180341 | orchestrator | "=== Summary ===", 2026-02-27 00:22:12.180356 | orchestrator | "Errors (version mismatches): 0", 2026-02-27 00:22:12.180372 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-27 00:22:12.180387 | orchestrator | "", 2026-02-27 00:22:12.180403 | orchestrator | "✅ All running containers match expected versions!" 2026-02-27 00:22:12.180418 | orchestrator | ] 2026-02-27 00:22:12.180433 | orchestrator | } 2026-02-27 00:22:12.180449 | orchestrator | 2026-02-27 00:22:12.180465 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-27 00:22:12.230795 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:22:12.230882 | orchestrator | 2026-02-27 00:22:12.230895 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:22:12.230907 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-27 00:22:12.230919 | orchestrator | 2026-02-27 00:22:12.345357 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-27 00:22:12.345462 | orchestrator | + deactivate 2026-02-27 00:22:12.345481 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-27 00:22:12.345495 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-27 00:22:12.345506 | orchestrator | + export PATH 2026-02-27 00:22:12.345518 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-27 00:22:12.345529 | orchestrator | + '[' -n '' ']' 2026-02-27 00:22:12.345541 | orchestrator | + hash -r 2026-02-27 00:22:12.345551 | orchestrator | + '[' -n '' ']' 2026-02-27 00:22:12.345563 | orchestrator | + unset VIRTUAL_ENV 2026-02-27 00:22:12.345573 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-27 00:22:12.345585 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-27 00:22:12.345596 | orchestrator | + unset -f deactivate 2026-02-27 00:22:12.345607 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-27 00:22:12.351866 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-27 00:22:12.351905 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-27 00:22:12.351946 | orchestrator | + local max_attempts=60 2026-02-27 00:22:12.351966 | orchestrator | + local name=ceph-ansible 2026-02-27 00:22:12.351985 | orchestrator | + local attempt_num=1 2026-02-27 00:22:12.353205 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:22:12.390819 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:22:12.391005 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-27 00:22:12.391032 | orchestrator | + local max_attempts=60 2026-02-27 00:22:12.391046 | orchestrator | + local name=kolla-ansible 2026-02-27 00:22:12.391057 | orchestrator | + local attempt_num=1 2026-02-27 00:22:12.391222 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-27 00:22:12.418350 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:22:12.418426 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-27 00:22:12.418440 | orchestrator | + local max_attempts=60 2026-02-27 00:22:12.418452 | orchestrator | + local name=osism-ansible 2026-02-27 00:22:12.418464 | orchestrator | + local attempt_num=1 2026-02-27 00:22:12.418963 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-27 00:22:12.456781 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:22:12.456867 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-27 00:22:12.456881 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-27 00:22:13.167967 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-27 00:22:13.352751 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-27 00:22:13.352852 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-27 00:22:13.352869 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-27 00:22:13.352883 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-27 00:22:13.352897 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-27 00:22:13.352930 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-27 00:22:13.352942 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-27 00:22:13.352953 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-27 00:22:13.352964 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-27 00:22:13.352976 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-27 00:22:13.352987 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-27 00:22:13.352998 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-27 00:22:13.353009 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-27 00:22:13.353044 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-27 00:22:13.353057 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-27 00:22:13.353069 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-27 00:22:13.358521 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-27 00:22:13.413616 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-27 00:22:13.413698 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-27 00:22:13.418570 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-27 00:22:25.754643 | orchestrator | 2026-02-27 00:22:25 | INFO  | Task cff99a0a-a574-4288-ae23-ba2cdac8cf0e (resolvconf) was prepared for execution. 2026-02-27 00:22:25.754752 | orchestrator | 2026-02-27 00:22:25 | INFO  | It takes a moment until task cff99a0a-a574-4288-ae23-ba2cdac8cf0e (resolvconf) has been started and output is visible here. 
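The xtrace above shows a `wait_for_container_healthy` helper polling `docker inspect -f '{{.State.Health.Status}}'` until a container reports healthy. A minimal sketch consistent with the traced variable names (`max_attempts`, `name`, `attempt_num`); the retry delay and the failure message are assumptions, not taken from the trace:

```shell
# Hypothetical reconstruction of the wait_for_container_healthy helper
# seen in the trace. Variable names match the xtrace output; the sleep
# interval and error text are assumed.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the container's health status until Docker reports "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

Called as in the trace, e.g. `wait_for_container_healthy 60 ceph-ansible`, it returns immediately when the healthcheck already reports `healthy` and fails after the attempt budget is exhausted.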
2026-02-27 00:22:40.419747 | orchestrator | 2026-02-27 00:22:40.419848 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-27 00:22:40.419863 | orchestrator | 2026-02-27 00:22:40.419875 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-27 00:22:40.419887 | orchestrator | Friday 27 February 2026 00:22:29 +0000 (0:00:00.147) 0:00:00.147 ******* 2026-02-27 00:22:40.419899 | orchestrator | ok: [testbed-manager] 2026-02-27 00:22:40.419912 | orchestrator | 2026-02-27 00:22:40.419924 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-27 00:22:40.419936 | orchestrator | Friday 27 February 2026 00:22:33 +0000 (0:00:04.059) 0:00:04.206 ******* 2026-02-27 00:22:40.419947 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:22:40.419959 | orchestrator | 2026-02-27 00:22:40.419970 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-27 00:22:40.419981 | orchestrator | Friday 27 February 2026 00:22:34 +0000 (0:00:00.070) 0:00:04.276 ******* 2026-02-27 00:22:40.419992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-27 00:22:40.420005 | orchestrator | 2026-02-27 00:22:40.420015 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-27 00:22:40.420026 | orchestrator | Friday 27 February 2026 00:22:34 +0000 (0:00:00.089) 0:00:04.366 ******* 2026-02-27 00:22:40.420048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-27 00:22:40.420059 | orchestrator | 2026-02-27 00:22:40.420071 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-27 00:22:40.420082 | orchestrator | Friday 27 February 2026 00:22:34 +0000 (0:00:00.076) 0:00:04.442 ******* 2026-02-27 00:22:40.420093 | orchestrator | ok: [testbed-manager] 2026-02-27 00:22:40.420104 | orchestrator | 2026-02-27 00:22:40.420115 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-27 00:22:40.420126 | orchestrator | Friday 27 February 2026 00:22:35 +0000 (0:00:01.214) 0:00:05.657 ******* 2026-02-27 00:22:40.420167 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:22:40.420189 | orchestrator | 2026-02-27 00:22:40.420209 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-27 00:22:40.420228 | orchestrator | Friday 27 February 2026 00:22:35 +0000 (0:00:00.064) 0:00:05.721 ******* 2026-02-27 00:22:40.420267 | orchestrator | ok: [testbed-manager] 2026-02-27 00:22:40.420279 | orchestrator | 2026-02-27 00:22:40.420290 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-27 00:22:40.420300 | orchestrator | Friday 27 February 2026 00:22:36 +0000 (0:00:00.552) 0:00:06.274 ******* 2026-02-27 00:22:40.420311 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:22:40.420322 | orchestrator | 2026-02-27 00:22:40.420332 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-27 00:22:40.420344 | orchestrator | Friday 27 February 2026 00:22:36 +0000 (0:00:00.077) 0:00:06.351 ******* 2026-02-27 00:22:40.420355 | orchestrator | changed: [testbed-manager] 2026-02-27 00:22:40.420365 | orchestrator | 2026-02-27 00:22:40.420376 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-27 00:22:40.420387 | orchestrator | Friday 27 February 2026 00:22:36 +0000 (0:00:00.605) 0:00:06.956 ******* 2026-02-27 00:22:40.420397 | orchestrator | changed: 
[testbed-manager] 2026-02-27 00:22:40.420408 | orchestrator | 2026-02-27 00:22:40.420419 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-27 00:22:40.420430 | orchestrator | Friday 27 February 2026 00:22:37 +0000 (0:00:01.166) 0:00:08.123 ******* 2026-02-27 00:22:40.420441 | orchestrator | ok: [testbed-manager] 2026-02-27 00:22:40.420451 | orchestrator | 2026-02-27 00:22:40.420462 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-27 00:22:40.420473 | orchestrator | Friday 27 February 2026 00:22:38 +0000 (0:00:01.002) 0:00:09.125 ******* 2026-02-27 00:22:40.420484 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-27 00:22:40.420495 | orchestrator | 2026-02-27 00:22:40.420505 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-27 00:22:40.420516 | orchestrator | Friday 27 February 2026 00:22:38 +0000 (0:00:00.076) 0:00:09.202 ******* 2026-02-27 00:22:40.420526 | orchestrator | changed: [testbed-manager] 2026-02-27 00:22:40.420537 | orchestrator | 2026-02-27 00:22:40.420548 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:22:40.420560 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-27 00:22:40.420570 | orchestrator | 2026-02-27 00:22:40.420581 | orchestrator | 2026-02-27 00:22:40.420592 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:22:40.420602 | orchestrator | Friday 27 February 2026 00:22:40 +0000 (0:00:01.183) 0:00:10.385 ******* 2026-02-27 00:22:40.420613 | orchestrator | =============================================================================== 2026-02-27 00:22:40.420623 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.06s 2026-02-27 00:22:40.420634 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.21s 2026-02-27 00:22:40.420645 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.18s 2026-02-27 00:22:40.420655 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.17s 2026-02-27 00:22:40.420666 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s 2026-02-27 00:22:40.420677 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.61s 2026-02-27 00:22:40.420704 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.55s 2026-02-27 00:22:40.420715 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-27 00:22:40.420726 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-27 00:22:40.420736 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-27 00:22:40.420747 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-02-27 00:22:40.420757 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-27 00:22:40.420775 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-27 00:22:40.757340 | orchestrator | + osism apply sshconfig 2026-02-27 00:22:52.936458 | orchestrator | 2026-02-27 00:22:52 | INFO  | Task 15eb9d11-373f-43bc-9420-c9d733b21ac4 (sshconfig) was prepared for execution. 
2026-02-27 00:22:52.936588 | orchestrator | 2026-02-27 00:22:52 | INFO  | It takes a moment until task 15eb9d11-373f-43bc-9420-c9d733b21ac4 (sshconfig) has been started and output is visible here. 2026-02-27 00:23:05.268533 | orchestrator | 2026-02-27 00:23:05.268622 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-27 00:23:05.268632 | orchestrator | 2026-02-27 00:23:05.268656 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-27 00:23:05.268664 | orchestrator | Friday 27 February 2026 00:22:57 +0000 (0:00:00.166) 0:00:00.166 ******* 2026-02-27 00:23:05.268670 | orchestrator | ok: [testbed-manager] 2026-02-27 00:23:05.268678 | orchestrator | 2026-02-27 00:23:05.268684 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-27 00:23:05.268690 | orchestrator | Friday 27 February 2026 00:22:57 +0000 (0:00:00.562) 0:00:00.728 ******* 2026-02-27 00:23:05.268696 | orchestrator | changed: [testbed-manager] 2026-02-27 00:23:05.268704 | orchestrator | 2026-02-27 00:23:05.268710 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-27 00:23:05.268716 | orchestrator | Friday 27 February 2026 00:22:58 +0000 (0:00:00.538) 0:00:01.267 ******* 2026-02-27 00:23:05.268722 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-27 00:23:05.268729 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-27 00:23:05.268735 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-27 00:23:05.268742 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-27 00:23:05.268748 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-27 00:23:05.268754 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-27 00:23:05.268761 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-27 00:23:05.268767 | orchestrator | 2026-02-27 00:23:05.268774 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-27 00:23:05.268780 | orchestrator | Friday 27 February 2026 00:23:04 +0000 (0:00:05.953) 0:00:07.221 ******* 2026-02-27 00:23:05.268787 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:23:05.268794 | orchestrator | 2026-02-27 00:23:05.268800 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-27 00:23:05.268807 | orchestrator | Friday 27 February 2026 00:23:04 +0000 (0:00:00.075) 0:00:07.297 ******* 2026-02-27 00:23:05.268813 | orchestrator | changed: [testbed-manager] 2026-02-27 00:23:05.268819 | orchestrator | 2026-02-27 00:23:05.268825 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:23:05.268834 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:23:05.268841 | orchestrator | 2026-02-27 00:23:05.268847 | orchestrator | 2026-02-27 00:23:05.268853 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:23:05.268857 | orchestrator | Friday 27 February 2026 00:23:04 +0000 (0:00:00.629) 0:00:07.926 ******* 2026-02-27 00:23:05.268861 | orchestrator | =============================================================================== 2026-02-27 00:23:05.268866 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.95s 2026-02-27 00:23:05.268870 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2026-02-27 00:23:05.268874 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2026-02-27 00:23:05.268877 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.54s 2026-02-27 00:23:05.268881 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-02-27 00:23:05.599881 | orchestrator | + osism apply known-hosts 2026-02-27 00:23:17.849947 | orchestrator | 2026-02-27 00:23:17 | INFO  | Task dc813bba-aab4-4190-bed6-74f4822a65a2 (known-hosts) was prepared for execution. 2026-02-27 00:23:17.850100 | orchestrator | 2026-02-27 00:23:17 | INFO  | It takes a moment until task dc813bba-aab4-4190-bed6-74f4822a65a2 (known-hosts) has been started and output is visible here. 2026-02-27 00:23:35.508898 | orchestrator | 2026-02-27 00:23:35.509027 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-27 00:23:35.509046 | orchestrator | 2026-02-27 00:23:35.509108 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-27 00:23:35.509123 | orchestrator | Friday 27 February 2026 00:23:22 +0000 (0:00:00.174) 0:00:00.174 ******* 2026-02-27 00:23:35.509136 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-27 00:23:35.509147 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-27 00:23:35.509159 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-27 00:23:35.509201 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-27 00:23:35.509213 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-27 00:23:35.509224 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-27 00:23:35.509235 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-27 00:23:35.509246 | orchestrator | 2026-02-27 00:23:35.509258 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-27 00:23:35.509270 | orchestrator | Friday 27 February 2026 00:23:28 +0000 (0:00:06.267) 0:00:06.441 ******* 2026-02-27 
00:23:35.509282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-27 00:23:35.509295 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-27 00:23:35.509306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-27 00:23:35.509317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-27 00:23:35.509328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-27 00:23:35.509349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-27 00:23:35.509360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-27 00:23:35.509371 | orchestrator | 2026-02-27 00:23:35.509382 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:35.509393 | orchestrator | Friday 27 February 2026 00:23:28 +0000 (0:00:00.206) 0:00:06.648 ******* 2026-02-27 00:23:35.509405 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLS2qCJM4vHsO3GrNeMOMXg3OZKMvJRxMOOloee8Nd9qyhmclXFLXM33kCrOAgJx9kVQ9vaDsBd1bcmydhycAqc=) 2026-02-27 00:23:35.509427 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSH9ffbCo11QSQw8Z8IJxruLmJxIhj39wsV331gNec84WT+AmqRenQK++YEwosa0sPHZ3rGULvlL0kp8+0ZqTrW+tWCip/p7ernnt3eylu01MyZWb+Wh4dtAYKCkqChCcXIWvTEa7uQnPvqdRv++v2CeIJUZgXSoMfmsGsLk7+Lwx04GHSIszuWIAFIT26V7dczVhKqtUvqja9dQi5CP8G6f1kA/eeXKK5tHBCtBHaJzt/svq44v01MOXduezlZpStFzHmIjSj2QLUFyiF2l8s+hse6spJy9TMF7egCg8ffzOpBQ21QwyTE3E1tjJxtrFsN7BYZaDnVDmfsjwRMy6ayFJsIYU+9EN216970eInu7oznjZB+FDJb8Qmd140LhBoO11yfDF3+/sB28VyjygMHGZGemRDr3GaptM1yh/aTP36ZaOlzIykQEe2LQyeUaOH+18+7aoVVSxL4ktL2et1IfXcWxakCyArWK0hid609IsjabR2qV2aCCif0NerCRM=) 2026-02-27 00:23:35.509468 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJ2wnl+JEEY5zTdjv8YEgLSg4uyxi+3pxsxq+YUsRJf) 2026-02-27 00:23:35.509483 | orchestrator | 2026-02-27 00:23:35.509496 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:35.509509 | orchestrator | Friday 27 February 2026 00:23:29 +0000 (0:00:01.239) 0:00:07.887 ******* 2026-02-27 00:23:35.509540 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRPjtQHE69cg8IDDC9mtepGOcgr4AdKdcmjFLaP8DfWby6oBr1iUljhAG1PGdDb4w17bNDRKC4JtmPqm/nIX3Dj/H3D1rE0zfoW4SmysxZqa/3t+xD/3usj+6t6DhhFnopRAlr2bQxTJJVAs6B+O3sPi1LFjN/m/ceXJjrc9DJB1+Jmwk5lLjBoFVrsaIGWZ5zVsf+yiryLd63c1fjok4Aag5Eid1UgqBJyILtF7p+a/1U0r8OU+Fkca1jY7KpfCpMO8y+/qsV5z6VqdCivFosMZGmKT3liuwMDwK5LwgKkecyglfM08+7oij0xeFd1HvlgMK7X0PXmJ/lHIfLEErewMsqx+f9hBTy3cPDCVH2XzcoaZbd/W/vK1j1Vo6MBv7QRO8vZiOtRF/Taha8wJpiu29ELLpSnT21UmLJIrE91ynylcPSK4KTo6Of72BTlF1dY42ZyYFrucsc38Kt8IBJtN8DwIUcwEeHehc97IdcgLRndYwDDloONZKsM9BdPEc=) 2026-02-27 00:23:35.509555 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMNC/blywLU4OB6AfcC0x6uB5urS7nlrAPohJVaPu1QIKMw82SSwgc8+9VVe7Aqrwj6P9FtRDA3jk9s47sKlAqI=) 2026-02-27 00:23:35.509568 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBVf8sWHAcOYbXsj0XyGScnK21tglVslqaueMfrqKo1e) 2026-02-27 00:23:35.509580 | orchestrator | 2026-02-27 00:23:35.509592 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:35.509604 | orchestrator | Friday 27 February 2026 00:23:30 +0000 (0:00:01.108) 0:00:08.996 ******* 2026-02-27 00:23:35.509617 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILteQGAW7q+N46YNQNNQ9Wh4HAuEt09o5pkqPGGL1XFd) 2026-02-27 00:23:35.509636 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlrU4H1q0X2xH/QHHY/FwIQzHQb1wFSZvajATDPAoqJst+Zmjwiir6qbE4ttcfnBgCh9Mnnctv52MWYNLKkdbtfwJy9IKTNTMOxOEwxcWtLwhKhMw1B/Q1DT3lNwlZQ5XyxjQ8yuq2On+uNOJfLEfTW5FuMwJf0D8U+TcFEe/EyGjEWC5ZsHz1iJvY0WxssfSUyw+mdkK6XoGVNwPOLMdq6ZRqFDr7dfXaAH5qaFcIc4o+UykzRCKGELBnj+nTG6DZ8Zk4uCPDgVbbs7IWR5/qCoq+tt8Q/3+CmoXJIIi1nKCH2Y0b+Y9ebsA7CQhQ/NhiBxXC95BzwA1gxBY/SsJFd+CYWjTP3+3/0YJewwPX5FBK2VJeBWAR+LMXhvM2xKyzX637vqdr8pU3OnxuMYnG6PByrtY1bcfCYU8iFh558qqRQcKngCN9w5/09HO60vxfG8kTc+Rr2bz0XOXscEpMzvbvqgq+I40HQTT6tp8FSX0JfeRK4+j3S6YwsPPXXYc=) 2026-02-27 00:23:35.509656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIdEhba5YzhmXeZYyxervsZe0uEkbryxrDl3W8ZrGp/yGm2q3WdIQe0DHIdSV1S4k3j5x/soDfK3oIgBmdQ7oT4=) 2026-02-27 00:23:35.509675 | orchestrator | 2026-02-27 00:23:35.509693 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:35.509711 | orchestrator | Friday 27 February 2026 00:23:32 +0000 (0:00:01.140) 0:00:10.137 ******* 
2026-02-27 00:23:35.509728 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTN8l4p8OYUPODYbezns3zgvpPXo/i05dLw4c+mz8i4hj84t1kFb1nyhEBIR8DFftzijmQ5kWMM97BsV4RqrS0nd8/3fNHPY9e7vUw74Oo1gINSTgRyiFPVj88dVHaMqUYqc3LLtHBw7iIYde5/nArYdLZrYGwWJmcaBxLCdQDS21eCbWAZvjWXAUctdhvpDkQbJ5bcVt4RoYAGbG5NEru3n9d5Dwl+B18XRWieIw3ssAFhsJHLxPwiZB7gaWFAYyj78aSF00kmRMWs8P3xkNFQ6w57rTc/MLKji9JGxCMgq+RL7QbtGStzZKLnd5cbwNZqYkoLmn1PQFWciEenCcND4XbjtHEP8kmJstqBTdjuFCOAMxw9FinQEJQ3UCGpQ+WylbFFJRO+bL6qaUPv69yvZ9Kih2M9mcBa1oKOJDWmHs4suo3ZISpnPZSSRXojnMXEJpKIT3tM6N2Az2lOzpbhPtjdHC+2isg7s6snWb3gWtfWiq1OBEIufDVgeJ9WNs=) 2026-02-27 00:23:35.509760 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIA/uW1pP5yjkHqmW43zoJRaGHxUqGoCYRGVuKQegnrDBnX2P6Jp003S2JNJVqTsXhw/Exf8Jmokhh3w07W24vk=) 2026-02-27 00:23:35.509779 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICTL+/kbEwI/7lsf+JhLhl/Wp/uHkhqFY5dWMztRGN3k) 2026-02-27 00:23:35.509796 | orchestrator | 2026-02-27 00:23:35.509814 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:35.509832 | orchestrator | Friday 27 February 2026 00:23:33 +0000 (0:00:01.157) 0:00:11.294 ******* 2026-02-27 00:23:35.509946 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDHRru493N8rty3WW0tlILUxvEgKoIUx07RBn2zs2PlZrekwZKp+bivEbH5I9Rmwd5p61lepxKl4q/CxE6UWNPg=) 2026-02-27 00:23:35.509973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzwPzFMNxBYKVWN3T3AY8SI9ezY22s31b8KSvU3ZFMtff064yz2lHoWj06cMLlJMijH90CrB+gYa0Fo9AWn9I2TYpKKOcy/wu3cfAWMzBtdTk4jIz5ZgTV+JdXfo/jyni3ZqKw5vYLTq5LniIkS9XC/7/6Ky/LJFxh2NxVhs17osb5bd17OzxwxrCI9lYtGa1dwyQvZotu2MceH1VqiOXDaf1z6NvNsrhnjuaz/tJSuEZu8QwmfJCshjkMx19ac/3zcXbaeuKM+na2jb6OeCNPLM8NaV4Q596L43L3JhWFNNMtL7tmlg968Gsvy2E7C0Vml7ohjfzciiQ+fLSN5ylbhzgcxaWjDEd0erVp9wt8IMWIK7m38WeyL+ToMJWEoylkYQf217s+shgXVd9J5rx/MPUhuQEMtNQABwJz7kmJ7yGeHNgAJt4q7+CvSXT78YmeuaT3GaSY5jLeCZnTc5D4bt/dlR1IJmB6FhyfE1wfZ3ZVjgiP1Z3LOA3V/eCex2s=) 2026-02-27 00:23:35.509987 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAhCyJAeP4Voqqocdu5VMGzQB6gjKdYU7KSqnjBv1LNX) 2026-02-27 00:23:35.509998 | orchestrator | 2026-02-27 00:23:35.510009 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:35.510080 | orchestrator | Friday 27 February 2026 00:23:34 +0000 (0:00:01.084) 0:00:12.378 ******* 2026-02-27 00:23:35.510107 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmirE3V8OnY9yeXQU0oQyOEqWkjgEF6Dt7vrWXFb+9/Pa0P8yBnxdGV1fplF2FhNOCBO7r0WN/QZJoeXuf6dYGP9WZcniRbxk29rHR7zgSc+fZlhz32O5Rr8oog2v6bJNpTbMqSvyTNyXtxc88Ew5jYoybhcNEaIvOmI/PfkLGxWY8NBQnDhcuW97B1z2Tm5EUEYFlIkMqcyokm7ET/6dilLw3+XEh/XLRuisW3nT0piS4gDBjzBzhakXb8H5tjMM6oAa0/zhDSnYhC19wE0oqeQEGy8gN8EsLotT9hDg35NmEMeoiRgKaYJ/t6aFAvlWehmSUmkaUhT4Clxgwgqp4XqOIzviO3hSd/s/1iVfTFbi0G3DyuBk35kcw/PiN7OL2zMOo3cqDE/dYoc6x5UAflqtFL5UQ3x/+SfG/auwpWI5wO9aFp43Pkd+vXgE/OPfNQfOMn2wo0ja2fWrh2AZJiAVYMg92hquwuOew6ilicKS2rD3AeJ9M/YMa6Q164t8=) 2026-02-27 00:23:46.656676 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAeb2riZ1/gGRYDq9JXcutKVNlBGNgjmtIylidpyFTVRajQsf4zJHGS/24LEGkTZ7W3DntSBkClnnQp0lCgAPk=) 2026-02-27 00:23:46.656787 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPDBZPkILep61YPlU/C1IjZ1YbtAs6htsCb+bmeoivWu) 2026-02-27 00:23:46.656803 | orchestrator | 2026-02-27 00:23:46.656815 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:46.656827 | orchestrator | Friday 27 February 2026 00:23:35 +0000 (0:00:01.158) 0:00:13.537 ******* 2026-02-27 00:23:46.656840 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdneq5ewP7LITFfxQPjI1jMp2Yy1pKtBacrK88a0Mnl58+WK9ccq+mEm9r/ja1e9PNDGWFk3XID6J4hS31NMLEMifRuJ7Jj6kM7asuZBfzEnxs6rgTjfuxwDv95yg1dT1KgEM5/mlJR3llHLZpNYR6gD24HpKDa4OTuIgezjNsmCNde9eya1XEWep+A2uWMhoQkAuNoQFv98mGlF6AXVJZs7M0v/WVPNZ1Rmp37HVO2aFAr0/odlgmhFmE1TIEjA2Pn3m2AVgH+NsR6Zn8y7cB6pPfRS30Qu3pv/5V6qwpBRIoO7X0hwM9fy6n+TVO2NqMdNwtoGy5pkiRu4ZRmbVnhEL7eKXS2d6V3/Vy2VuYCgfRwAb+dtJG0oYKGzydJpT4PeOiyV4qCfOuf6i8KFqU0xOAFZzHxa5chbjv84hXQ2L0UTbpcgw9wo0UvkY8ow29F91UloBcAkbB2H3u12pD2BNLjOr00vQKZ7N8/DOGe6bfj91pkQMH1UtaKP4yEjE=) 2026-02-27 00:23:46.656853 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO456Sq40ZxXqJihcPuewQuiSRFjJRpjhTtR+GNsOXI5a+Kb+VctMUJGIASui+OPXb1X/duF0XLItWnIRiGriuY=) 2026-02-27 00:23:46.656888 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILt6Z4+MJPgU/5HYSRk8dLq2Ge4eguAkJ1EKHR506ym4) 2026-02-27 00:23:46.656899 | orchestrator | 2026-02-27 00:23:46.656909 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-27 00:23:46.656919 | orchestrator | Friday 27 February 2026 00:23:36 +0000 (0:00:01.101) 0:00:14.638 ******* 2026-02-27 00:23:46.656929 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-27 00:23:46.656939 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-27 00:23:46.656948 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-4) 2026-02-27 00:23:46.656958 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-27 00:23:46.656968 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-27 00:23:46.656977 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-27 00:23:46.656986 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-27 00:23:46.656996 | orchestrator | 2026-02-27 00:23:46.657006 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-27 00:23:46.657016 | orchestrator | Friday 27 February 2026 00:23:41 +0000 (0:00:05.385) 0:00:20.023 ******* 2026-02-27 00:23:46.657027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-27 00:23:46.657038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-27 00:23:46.657048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-27 00:23:46.657058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-27 00:23:46.657068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-27 00:23:46.657077 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-27 00:23:46.657087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-27 00:23:46.657097 | orchestrator | 2026-02-27 00:23:46.657106 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:46.657116 | orchestrator | Friday 27 February 2026 00:23:42 +0000 (0:00:00.176) 0:00:20.200 ******* 2026-02-27 00:23:46.657125 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGJ2wnl+JEEY5zTdjv8YEgLSg4uyxi+3pxsxq+YUsRJf) 2026-02-27 00:23:46.657544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDSH9ffbCo11QSQw8Z8IJxruLmJxIhj39wsV331gNec84WT+AmqRenQK++YEwosa0sPHZ3rGULvlL0kp8+0ZqTrW+tWCip/p7ernnt3eylu01MyZWb+Wh4dtAYKCkqChCcXIWvTEa7uQnPvqdRv++v2CeIJUZgXSoMfmsGsLk7+Lwx04GHSIszuWIAFIT26V7dczVhKqtUvqja9dQi5CP8G6f1kA/eeXKK5tHBCtBHaJzt/svq44v01MOXduezlZpStFzHmIjSj2QLUFyiF2l8s+hse6spJy9TMF7egCg8ffzOpBQ21QwyTE3E1tjJxtrFsN7BYZaDnVDmfsjwRMy6ayFJsIYU+9EN216970eInu7oznjZB+FDJb8Qmd140LhBoO11yfDF3+/sB28VyjygMHGZGemRDr3GaptM1yh/aTP36ZaOlzIykQEe2LQyeUaOH+18+7aoVVSxL4ktL2et1IfXcWxakCyArWK0hid609IsjabR2qV2aCCif0NerCRM=) 2026-02-27 00:23:46.657598 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLS2qCJM4vHsO3GrNeMOMXg3OZKMvJRxMOOloee8Nd9qyhmclXFLXM33kCrOAgJx9kVQ9vaDsBd1bcmydhycAqc=) 2026-02-27 00:23:46.657643 | orchestrator | 2026-02-27 00:23:46.657664 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:46.657682 | orchestrator | Friday 27 February 2026 
00:23:43 +0000 (0:00:01.139) 0:00:21.340 ******* 2026-02-27 00:23:46.657699 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRPjtQHE69cg8IDDC9mtepGOcgr4AdKdcmjFLaP8DfWby6oBr1iUljhAG1PGdDb4w17bNDRKC4JtmPqm/nIX3Dj/H3D1rE0zfoW4SmysxZqa/3t+xD/3usj+6t6DhhFnopRAlr2bQxTJJVAs6B+O3sPi1LFjN/m/ceXJjrc9DJB1+Jmwk5lLjBoFVrsaIGWZ5zVsf+yiryLd63c1fjok4Aag5Eid1UgqBJyILtF7p+a/1U0r8OU+Fkca1jY7KpfCpMO8y+/qsV5z6VqdCivFosMZGmKT3liuwMDwK5LwgKkecyglfM08+7oij0xeFd1HvlgMK7X0PXmJ/lHIfLEErewMsqx+f9hBTy3cPDCVH2XzcoaZbd/W/vK1j1Vo6MBv7QRO8vZiOtRF/Taha8wJpiu29ELLpSnT21UmLJIrE91ynylcPSK4KTo6Of72BTlF1dY42ZyYFrucsc38Kt8IBJtN8DwIUcwEeHehc97IdcgLRndYwDDloONZKsM9BdPEc=) 2026-02-27 00:23:46.657716 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMNC/blywLU4OB6AfcC0x6uB5urS7nlrAPohJVaPu1QIKMw82SSwgc8+9VVe7Aqrwj6P9FtRDA3jk9s47sKlAqI=) 2026-02-27 00:23:46.657733 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBVf8sWHAcOYbXsj0XyGScnK21tglVslqaueMfrqKo1e) 2026-02-27 00:23:46.657749 | orchestrator | 2026-02-27 00:23:46.657801 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:46.657819 | orchestrator | Friday 27 February 2026 00:23:44 +0000 (0:00:01.137) 0:00:22.477 ******* 2026-02-27 00:23:46.657832 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILteQGAW7q+N46YNQNNQ9Wh4HAuEt09o5pkqPGGL1XFd) 2026-02-27 00:23:46.657842 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDlrU4H1q0X2xH/QHHY/FwIQzHQb1wFSZvajATDPAoqJst+Zmjwiir6qbE4ttcfnBgCh9Mnnctv52MWYNLKkdbtfwJy9IKTNTMOxOEwxcWtLwhKhMw1B/Q1DT3lNwlZQ5XyxjQ8yuq2On+uNOJfLEfTW5FuMwJf0D8U+TcFEe/EyGjEWC5ZsHz1iJvY0WxssfSUyw+mdkK6XoGVNwPOLMdq6ZRqFDr7dfXaAH5qaFcIc4o+UykzRCKGELBnj+nTG6DZ8Zk4uCPDgVbbs7IWR5/qCoq+tt8Q/3+CmoXJIIi1nKCH2Y0b+Y9ebsA7CQhQ/NhiBxXC95BzwA1gxBY/SsJFd+CYWjTP3+3/0YJewwPX5FBK2VJeBWAR+LMXhvM2xKyzX637vqdr8pU3OnxuMYnG6PByrtY1bcfCYU8iFh558qqRQcKngCN9w5/09HO60vxfG8kTc+Rr2bz0XOXscEpMzvbvqgq+I40HQTT6tp8FSX0JfeRK4+j3S6YwsPPXXYc=) 2026-02-27 00:23:46.657853 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIdEhba5YzhmXeZYyxervsZe0uEkbryxrDl3W8ZrGp/yGm2q3WdIQe0DHIdSV1S4k3j5x/soDfK3oIgBmdQ7oT4=) 2026-02-27 00:23:46.657863 | orchestrator | 2026-02-27 00:23:46.657872 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:46.657882 | orchestrator | Friday 27 February 2026 00:23:45 +0000 (0:00:01.134) 0:00:23.612 ******* 2026-02-27 00:23:46.657891 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICTL+/kbEwI/7lsf+JhLhl/Wp/uHkhqFY5dWMztRGN3k) 2026-02-27 00:23:46.657901 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTN8l4p8OYUPODYbezns3zgvpPXo/i05dLw4c+mz8i4hj84t1kFb1nyhEBIR8DFftzijmQ5kWMM97BsV4RqrS0nd8/3fNHPY9e7vUw74Oo1gINSTgRyiFPVj88dVHaMqUYqc3LLtHBw7iIYde5/nArYdLZrYGwWJmcaBxLCdQDS21eCbWAZvjWXAUctdhvpDkQbJ5bcVt4RoYAGbG5NEru3n9d5Dwl+B18XRWieIw3ssAFhsJHLxPwiZB7gaWFAYyj78aSF00kmRMWs8P3xkNFQ6w57rTc/MLKji9JGxCMgq+RL7QbtGStzZKLnd5cbwNZqYkoLmn1PQFWciEenCcND4XbjtHEP8kmJstqBTdjuFCOAMxw9FinQEJQ3UCGpQ+WylbFFJRO+bL6qaUPv69yvZ9Kih2M9mcBa1oKOJDWmHs4suo3ZISpnPZSSRXojnMXEJpKIT3tM6N2Az2lOzpbhPtjdHC+2isg7s6snWb3gWtfWiq1OBEIufDVgeJ9WNs=) 2026-02-27 00:23:46.657930 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIA/uW1pP5yjkHqmW43zoJRaGHxUqGoCYRGVuKQegnrDBnX2P6Jp003S2JNJVqTsXhw/Exf8Jmokhh3w07W24vk=) 2026-02-27 00:23:51.312139 | orchestrator | 2026-02-27 00:23:51.312356 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:51.312386 | orchestrator | Friday 27 February 2026 00:23:46 +0000 (0:00:01.073) 0:00:24.685 ******* 2026-02-27 00:23:51.312407 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAhCyJAeP4Voqqocdu5VMGzQB6gjKdYU7KSqnjBv1LNX) 2026-02-27 00:23:51.312431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzwPzFMNxBYKVWN3T3AY8SI9ezY22s31b8KSvU3ZFMtff064yz2lHoWj06cMLlJMijH90CrB+gYa0Fo9AWn9I2TYpKKOcy/wu3cfAWMzBtdTk4jIz5ZgTV+JdXfo/jyni3ZqKw5vYLTq5LniIkS9XC/7/6Ky/LJFxh2NxVhs17osb5bd17OzxwxrCI9lYtGa1dwyQvZotu2MceH1VqiOXDaf1z6NvNsrhnjuaz/tJSuEZu8QwmfJCshjkMx19ac/3zcXbaeuKM+na2jb6OeCNPLM8NaV4Q596L43L3JhWFNNMtL7tmlg968Gsvy2E7C0Vml7ohjfzciiQ+fLSN5ylbhzgcxaWjDEd0erVp9wt8IMWIK7m38WeyL+ToMJWEoylkYQf217s+shgXVd9J5rx/MPUhuQEMtNQABwJz7kmJ7yGeHNgAJt4q7+CvSXT78YmeuaT3GaSY5jLeCZnTc5D4bt/dlR1IJmB6FhyfE1wfZ3ZVjgiP1Z3LOA3V/eCex2s=) 2026-02-27 00:23:51.312457 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDHRru493N8rty3WW0tlILUxvEgKoIUx07RBn2zs2PlZrekwZKp+bivEbH5I9Rmwd5p61lepxKl4q/CxE6UWNPg=) 2026-02-27 00:23:51.312479 | orchestrator | 2026-02-27 00:23:51.312496 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:51.312518 | orchestrator | Friday 27 February 2026 00:23:47 +0000 (0:00:01.126) 0:00:25.812 ******* 2026-02-27 00:23:51.312537 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDmirE3V8OnY9yeXQU0oQyOEqWkjgEF6Dt7vrWXFb+9/Pa0P8yBnxdGV1fplF2FhNOCBO7r0WN/QZJoeXuf6dYGP9WZcniRbxk29rHR7zgSc+fZlhz32O5Rr8oog2v6bJNpTbMqSvyTNyXtxc88Ew5jYoybhcNEaIvOmI/PfkLGxWY8NBQnDhcuW97B1z2Tm5EUEYFlIkMqcyokm7ET/6dilLw3+XEh/XLRuisW3nT0piS4gDBjzBzhakXb8H5tjMM6oAa0/zhDSnYhC19wE0oqeQEGy8gN8EsLotT9hDg35NmEMeoiRgKaYJ/t6aFAvlWehmSUmkaUhT4Clxgwgqp4XqOIzviO3hSd/s/1iVfTFbi0G3DyuBk35kcw/PiN7OL2zMOo3cqDE/dYoc6x5UAflqtFL5UQ3x/+SfG/auwpWI5wO9aFp43Pkd+vXgE/OPfNQfOMn2wo0ja2fWrh2AZJiAVYMg92hquwuOew6ilicKS2rD3AeJ9M/YMa6Q164t8=) 2026-02-27 00:23:51.312556 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAAeb2riZ1/gGRYDq9JXcutKVNlBGNgjmtIylidpyFTVRajQsf4zJHGS/24LEGkTZ7W3DntSBkClnnQp0lCgAPk=) 2026-02-27 00:23:51.312575 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPDBZPkILep61YPlU/C1IjZ1YbtAs6htsCb+bmeoivWu) 2026-02-27 00:23:51.312593 | orchestrator | 2026-02-27 00:23:51.312610 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-27 00:23:51.312626 | orchestrator | Friday 27 February 2026 00:23:48 +0000 (0:00:01.103) 0:00:26.915 ******* 2026-02-27 00:23:51.312666 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdneq5ewP7LITFfxQPjI1jMp2Yy1pKtBacrK88a0Mnl58+WK9ccq+mEm9r/ja1e9PNDGWFk3XID6J4hS31NMLEMifRuJ7Jj6kM7asuZBfzEnxs6rgTjfuxwDv95yg1dT1KgEM5/mlJR3llHLZpNYR6gD24HpKDa4OTuIgezjNsmCNde9eya1XEWep+A2uWMhoQkAuNoQFv98mGlF6AXVJZs7M0v/WVPNZ1Rmp37HVO2aFAr0/odlgmhFmE1TIEjA2Pn3m2AVgH+NsR6Zn8y7cB6pPfRS30Qu3pv/5V6qwpBRIoO7X0hwM9fy6n+TVO2NqMdNwtoGy5pkiRu4ZRmbVnhEL7eKXS2d6V3/Vy2VuYCgfRwAb+dtJG0oYKGzydJpT4PeOiyV4qCfOuf6i8KFqU0xOAFZzHxa5chbjv84hXQ2L0UTbpcgw9wo0UvkY8ow29F91UloBcAkbB2H3u12pD2BNLjOr00vQKZ7N8/DOGe6bfj91pkQMH1UtaKP4yEjE=) 2026-02-27 00:23:51.312688 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILt6Z4+MJPgU/5HYSRk8dLq2Ge4eguAkJ1EKHR506ym4) 2026-02-27 00:23:51.312708 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO456Sq40ZxXqJihcPuewQuiSRFjJRpjhTtR+GNsOXI5a+Kb+VctMUJGIASui+OPXb1X/duF0XLItWnIRiGriuY=) 2026-02-27 00:23:51.312727 | orchestrator | 2026-02-27 00:23:51.312744 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-27 00:23:51.312793 | orchestrator | Friday 27 February 2026 00:23:49 +0000 (0:00:01.118) 0:00:28.033 ******* 2026-02-27 00:23:51.312813 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-27 00:23:51.312832 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-27 00:23:51.312849 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-27 00:23:51.312865 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-27 00:23:51.312881 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-27 00:23:51.312897 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-27 00:23:51.312914 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-27 00:23:51.312932 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:23:51.312953 | orchestrator | 2026-02-27 00:23:51.312995 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-27 00:23:51.313015 | orchestrator | Friday 27 February 2026 00:23:50 +0000 (0:00:00.172) 0:00:28.205 ******* 2026-02-27 00:23:51.313033 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:23:51.313050 | orchestrator | 2026-02-27 00:23:51.313066 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-27 00:23:51.313083 | orchestrator | Friday 27 February 2026 
00:23:50 +0000 (0:00:00.072) 0:00:28.278 ******* 2026-02-27 00:23:51.313111 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:23:51.313128 | orchestrator | 2026-02-27 00:23:51.313147 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-27 00:23:51.313202 | orchestrator | Friday 27 February 2026 00:23:50 +0000 (0:00:00.060) 0:00:28.338 ******* 2026-02-27 00:23:51.313219 | orchestrator | changed: [testbed-manager] 2026-02-27 00:23:51.313235 | orchestrator | 2026-02-27 00:23:51.313250 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:23:51.313267 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-27 00:23:51.313284 | orchestrator | 2026-02-27 00:23:51.313301 | orchestrator | 2026-02-27 00:23:51.313318 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:23:51.313335 | orchestrator | Friday 27 February 2026 00:23:51 +0000 (0:00:00.789) 0:00:29.128 ******* 2026-02-27 00:23:51.313351 | orchestrator | =============================================================================== 2026-02-27 00:23:51.313367 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.27s 2026-02-27 00:23:51.313383 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.39s 2026-02-27 00:23:51.313399 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.24s 2026-02-27 00:23:51.313414 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-27 00:23:51.313430 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-27 00:23:51.313446 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-27 
00:23:51.313463 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-27 00:23:51.313479 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-27 00:23:51.313495 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-27 00:23:51.313512 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-02-27 00:23:51.313528 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-02-27 00:23:51.313544 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-02-27 00:23:51.313560 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-27 00:23:51.313575 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-02-27 00:23:51.313610 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-02-27 00:23:51.313628 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-02-27 00:23:51.313644 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.79s 2026-02-27 00:23:51.313659 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.21s 2026-02-27 00:23:51.313678 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s 2026-02-27 00:23:51.313693 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2026-02-27 00:23:51.634360 | orchestrator | + osism apply squid 2026-02-27 00:24:03.772728 | orchestrator | 2026-02-27 00:24:03 | INFO  | Task 5475255f-a414-4002-8eac-40624ecf58b4 (squid) was prepared for execution. 
2026-02-27 00:24:03.772811 | orchestrator | 2026-02-27 00:24:03 | INFO  | It takes a moment until task 5475255f-a414-4002-8eac-40624ecf58b4 (squid) has been started and output is visible here. 2026-02-27 00:26:02.253828 | orchestrator | 2026-02-27 00:26:02.253956 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-27 00:26:02.253977 | orchestrator | 2026-02-27 00:26:02.253993 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-27 00:26:02.254008 | orchestrator | Friday 27 February 2026 00:24:08 +0000 (0:00:00.168) 0:00:00.168 ******* 2026-02-27 00:26:02.254156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-27 00:26:02.254170 | orchestrator | 2026-02-27 00:26:02.254178 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-27 00:26:02.254186 | orchestrator | Friday 27 February 2026 00:24:08 +0000 (0:00:00.096) 0:00:00.265 ******* 2026-02-27 00:26:02.254195 | orchestrator | ok: [testbed-manager] 2026-02-27 00:26:02.254205 | orchestrator | 2026-02-27 00:26:02.254213 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-27 00:26:02.254221 | orchestrator | Friday 27 February 2026 00:24:09 +0000 (0:00:01.550) 0:00:01.816 ******* 2026-02-27 00:26:02.254230 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-27 00:26:02.254238 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-27 00:26:02.254247 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-27 00:26:02.254255 | orchestrator | 2026-02-27 00:26:02.254263 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-27 00:26:02.254271 | orchestrator | Friday 27 
February 2026 00:24:10 +0000 (0:00:01.220) 0:00:03.037 ******* 2026-02-27 00:26:02.254279 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-27 00:26:02.254287 | orchestrator | 2026-02-27 00:26:02.254295 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-27 00:26:02.254303 | orchestrator | Friday 27 February 2026 00:24:12 +0000 (0:00:01.124) 0:00:04.161 ******* 2026-02-27 00:26:02.254311 | orchestrator | ok: [testbed-manager] 2026-02-27 00:26:02.254320 | orchestrator | 2026-02-27 00:26:02.254328 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-27 00:26:02.254336 | orchestrator | Friday 27 February 2026 00:24:12 +0000 (0:00:00.380) 0:00:04.542 ******* 2026-02-27 00:26:02.254345 | orchestrator | changed: [testbed-manager] 2026-02-27 00:26:02.254353 | orchestrator | 2026-02-27 00:26:02.254363 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-27 00:26:02.254373 | orchestrator | Friday 27 February 2026 00:24:13 +0000 (0:00:00.946) 0:00:05.488 ******* 2026-02-27 00:26:02.254382 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-27 00:26:02.254396 | orchestrator | ok: [testbed-manager] 2026-02-27 00:26:02.254405 | orchestrator | 2026-02-27 00:26:02.254416 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-27 00:26:02.254450 | orchestrator | Friday 27 February 2026 00:24:49 +0000 (0:00:35.664) 0:00:41.153 ******* 2026-02-27 00:26:02.254459 | orchestrator | changed: [testbed-manager] 2026-02-27 00:26:02.254469 | orchestrator | 2026-02-27 00:26:02.254478 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-27 00:26:02.254488 | orchestrator | Friday 27 February 2026 00:25:01 +0000 (0:00:12.082) 0:00:53.235 ******* 2026-02-27 00:26:02.254498 | orchestrator | Pausing for 60 seconds 2026-02-27 00:26:02.254508 | orchestrator | changed: [testbed-manager] 2026-02-27 00:26:02.254517 | orchestrator | 2026-02-27 00:26:02.254527 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-27 00:26:02.254536 | orchestrator | Friday 27 February 2026 00:26:01 +0000 (0:01:00.087) 0:01:53.323 ******* 2026-02-27 00:26:02.254549 | orchestrator | ok: [testbed-manager] 2026-02-27 00:26:02.254563 | orchestrator | 2026-02-27 00:26:02.254578 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-27 00:26:02.254606 | orchestrator | Friday 27 February 2026 00:26:01 +0000 (0:00:00.085) 0:01:53.408 ******* 2026-02-27 00:26:02.254621 | orchestrator | changed: [testbed-manager] 2026-02-27 00:26:02.254636 | orchestrator | 2026-02-27 00:26:02.254651 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:26:02.254666 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:26:02.254680 | orchestrator | 2026-02-27 00:26:02.254694 | orchestrator | 2026-02-27 00:26:02.254707 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-27 00:26:02.254723 | orchestrator | Friday 27 February 2026 00:26:01 +0000 (0:00:00.655) 0:01:54.064 ******* 2026-02-27 00:26:02.254737 | orchestrator | =============================================================================== 2026-02-27 00:26:02.254770 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-27 00:26:02.254785 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 35.66s 2026-02-27 00:26:02.254798 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.08s 2026-02-27 00:26:02.254813 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.55s 2026-02-27 00:26:02.254826 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2026-02-27 00:26:02.254840 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2026-02-27 00:26:02.254854 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-02-27 00:26:02.254868 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2026-02-27 00:26:02.254882 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-02-27 00:26:02.254896 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2026-02-27 00:26:02.254911 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.09s 2026-02-27 00:26:02.673310 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-27 00:26:02.673572 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-27 00:26:02.732333 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-27 00:26:02.732446 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-27 00:26:02.738670 | orchestrator | + set -e 2026-02-27 00:26:02.738761 | orchestrator | + NAMESPACE=kolla/release 2026-02-27 00:26:02.738777 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-27 00:26:02.746300 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-27 00:26:02.810499 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-27 00:26:02.811275 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-27 00:26:15.195661 | orchestrator | 2026-02-27 00:26:15 | INFO  | Task 326d3eb8-1885-45c8-98df-e6d9bb46bd16 (operator) was prepared for execution. 2026-02-27 00:26:15.195768 | orchestrator | 2026-02-27 00:26:15 | INFO  | It takes a moment until task 326d3eb8-1885-45c8-98df-e6d9bb46bd16 (operator) has been started and output is visible here. 2026-02-27 00:26:31.577763 | orchestrator | 2026-02-27 00:26:31.577924 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-27 00:26:31.577953 | orchestrator | 2026-02-27 00:26:31.577973 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-27 00:26:31.577991 | orchestrator | Friday 27 February 2026 00:26:19 +0000 (0:00:00.157) 0:00:00.157 ******* 2026-02-27 00:26:31.578285 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:26:31.578322 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:26:31.578336 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:26:31.578349 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:26:31.578362 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:26:31.578374 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:26:31.578387 | orchestrator | 2026-02-27 00:26:31.578400 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-27 00:26:31.578413 | orchestrator | Friday 27 February 2026 00:26:22 +0000 (0:00:03.318) 0:00:03.476 
******* 2026-02-27 00:26:31.578425 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:26:31.578438 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:26:31.578450 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:26:31.578480 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:26:31.578520 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:26:31.578531 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:26:31.578543 | orchestrator | 2026-02-27 00:26:31.578554 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-27 00:26:31.578565 | orchestrator | 2026-02-27 00:26:31.578576 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-27 00:26:31.578588 | orchestrator | Friday 27 February 2026 00:26:23 +0000 (0:00:00.820) 0:00:04.297 ******* 2026-02-27 00:26:31.578599 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:26:31.578610 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:26:31.578621 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:26:31.578632 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:26:31.578643 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:26:31.578656 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:26:31.578667 | orchestrator | 2026-02-27 00:26:31.578678 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-27 00:26:31.578689 | orchestrator | Friday 27 February 2026 00:26:23 +0000 (0:00:00.197) 0:00:04.494 ******* 2026-02-27 00:26:31.578700 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:26:31.578712 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:26:31.578741 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:26:31.578753 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:26:31.578764 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:26:31.578798 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:26:31.578809 | orchestrator | 2026-02-27 00:26:31.578821 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-27 00:26:31.578840 | orchestrator | Friday 27 February 2026 00:26:24 +0000 (0:00:00.234) 0:00:04.729 ******* 2026-02-27 00:26:31.578860 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:26:31.578882 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:26:31.578901 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:26:31.578916 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:26:31.578927 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:26:31.578939 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:26:31.578950 | orchestrator | 2026-02-27 00:26:31.578961 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-27 00:26:31.578972 | orchestrator | Friday 27 February 2026 00:26:24 +0000 (0:00:00.708) 0:00:05.437 ******* 2026-02-27 00:26:31.579073 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:26:31.579087 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:26:31.579098 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:26:31.579109 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:26:31.579121 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:26:31.579132 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:26:31.579172 | orchestrator | 2026-02-27 00:26:31.579184 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-27 00:26:31.579446 | orchestrator | Friday 27 February 2026 00:26:25 +0000 (0:00:00.839) 0:00:06.277 ******* 2026-02-27 00:26:31.579459 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-27 00:26:31.579470 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-27 00:26:31.579481 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-27 00:26:31.579492 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-27 00:26:31.579503 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-27 00:26:31.579513 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-27 00:26:31.579524 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-27 00:26:31.579535 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-27 00:26:31.579545 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-27 00:26:31.579556 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-27 00:26:31.579567 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-27 00:26:31.579578 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-27 00:26:31.579588 | orchestrator | 2026-02-27 00:26:31.579599 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-27 00:26:31.579610 | orchestrator | Friday 27 February 2026 00:26:26 +0000 (0:00:01.176) 0:00:07.453 ******* 2026-02-27 00:26:31.579621 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:26:31.579632 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:26:31.579643 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:26:31.579654 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:26:31.579664 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:26:31.579675 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:26:31.579686 | orchestrator | 2026-02-27 00:26:31.579698 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-27 00:26:31.579710 | orchestrator | Friday 27 February 2026 00:26:28 +0000 (0:00:01.171) 0:00:08.625 ******* 2026-02-27 00:26:31.579721 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-27 00:26:31.579732 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-27 00:26:31.579742 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-27 00:26:31.579753 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-27 00:26:31.579793 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-27 00:26:31.579804 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-27 00:26:31.579815 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-27 00:26:31.579825 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-27 00:26:31.579836 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-27 00:26:31.579847 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-27 00:26:31.579857 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-27 00:26:31.579868 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-27 00:26:31.579879 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-27 00:26:31.579890 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-27 00:26:31.579900 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-27 00:26:31.579911 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-27 00:26:31.579922 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-27 00:26:31.579932 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-27 00:26:31.579943 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-27 00:26:31.579954 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-27 00:26:31.579975 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-27 00:26:31.579986 | 
orchestrator | 2026-02-27 00:26:31.579997 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-27 00:26:31.580070 | orchestrator | Friday 27 February 2026 00:26:29 +0000 (0:00:01.176) 0:00:09.801 ******* 2026-02-27 00:26:31.580082 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:26:31.580093 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:26:31.580104 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:26:31.580115 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:26:31.580125 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:26:31.580136 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:26:31.580147 | orchestrator | 2026-02-27 00:26:31.580158 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-27 00:26:31.580169 | orchestrator | Friday 27 February 2026 00:26:29 +0000 (0:00:00.175) 0:00:09.977 ******* 2026-02-27 00:26:31.580179 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:26:31.580190 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:26:31.580201 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:26:31.580212 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:26:31.580223 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:26:31.580233 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:26:31.580244 | orchestrator | 2026-02-27 00:26:31.580255 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-27 00:26:31.580266 | orchestrator | Friday 27 February 2026 00:26:29 +0000 (0:00:00.196) 0:00:10.174 ******* 2026-02-27 00:26:31.580277 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:26:31.580287 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:26:31.580298 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:26:31.580308 | orchestrator | changed: [testbed-node-3] 2026-02-27 
00:26:31.580319 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:26:31.580330 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:26:31.580340 | orchestrator | 2026-02-27 00:26:31.580351 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-27 00:26:31.580362 | orchestrator | Friday 27 February 2026 00:26:30 +0000 (0:00:00.592) 0:00:10.767 ******* 2026-02-27 00:26:31.580373 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:26:31.580383 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:26:31.580394 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:26:31.580405 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:26:31.580428 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:26:31.580439 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:26:31.580450 | orchestrator | 2026-02-27 00:26:31.580483 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-27 00:26:31.580495 | orchestrator | Friday 27 February 2026 00:26:30 +0000 (0:00:00.224) 0:00:10.991 ******* 2026-02-27 00:26:31.580506 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-27 00:26:31.580517 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-27 00:26:31.580528 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-27 00:26:31.580539 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:26:31.580550 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:26:31.580560 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:26:31.580571 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-27 00:26:31.580582 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-27 00:26:31.580593 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-27 00:26:31.580603 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:26:31.580613 | orchestrator | changed: [testbed-node-3] 2026-02-27 
00:26:31.580623 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:26:31.580632 | orchestrator | 2026-02-27 00:26:31.580642 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-27 00:26:31.580652 | orchestrator | Friday 27 February 2026 00:26:31 +0000 (0:00:00.727) 0:00:11.719 ******* 2026-02-27 00:26:31.580668 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:26:31.580678 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:26:31.580687 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:26:31.580697 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:26:31.580707 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:26:31.580716 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:26:31.580726 | orchestrator | 2026-02-27 00:26:31.580736 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-27 00:26:31.580746 | orchestrator | Friday 27 February 2026 00:26:31 +0000 (0:00:00.195) 0:00:11.914 ******* 2026-02-27 00:26:31.580755 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:26:31.580765 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:26:31.580775 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:26:31.580784 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:26:31.580803 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:26:33.061325 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:26:33.061432 | orchestrator | 2026-02-27 00:26:33.061447 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-27 00:26:33.061459 | orchestrator | Friday 27 February 2026 00:26:31 +0000 (0:00:00.196) 0:00:12.111 ******* 2026-02-27 00:26:33.061471 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:26:33.061483 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:26:33.061494 | orchestrator | skipping: [testbed-node-2] 2026-02-27 
00:26:33.061505 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:26:33.061516 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:26:33.061527 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:26:33.061538 | orchestrator | 2026-02-27 00:26:33.061549 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-27 00:26:33.061560 | orchestrator | Friday 27 February 2026 00:26:31 +0000 (0:00:00.198) 0:00:12.309 ******* 2026-02-27 00:26:33.061571 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:26:33.061582 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:26:33.061610 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:26:33.061622 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:26:33.061632 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:26:33.061643 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:26:33.061654 | orchestrator | 2026-02-27 00:26:33.061664 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-27 00:26:33.061675 | orchestrator | Friday 27 February 2026 00:26:32 +0000 (0:00:00.667) 0:00:12.977 ******* 2026-02-27 00:26:33.061686 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:26:33.061696 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:26:33.061708 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:26:33.061719 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:26:33.061730 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:26:33.061741 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:26:33.061751 | orchestrator | 2026-02-27 00:26:33.061763 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:26:33.061775 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 00:26:33.061787 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 00:26:33.061798 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 00:26:33.061809 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 00:26:33.061820 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 00:26:33.061851 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 00:26:33.061863 | orchestrator | 2026-02-27 00:26:33.061876 | orchestrator | 2026-02-27 00:26:33.061889 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:26:33.061903 | orchestrator | Friday 27 February 2026 00:26:32 +0000 (0:00:00.296) 0:00:13.274 ******* 2026-02-27 00:26:33.061915 | orchestrator | =============================================================================== 2026-02-27 00:26:33.061928 | orchestrator | Gathering Facts --------------------------------------------------------- 3.32s 2026-02-27 00:26:33.061940 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.18s 2026-02-27 00:26:33.061954 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s 2026-02-27 00:26:33.061966 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.17s 2026-02-27 00:26:33.061978 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2026-02-27 00:26:33.061991 | orchestrator | Do not require tty for all users ---------------------------------------- 0.82s 2026-02-27 00:26:33.062090 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2026-02-27 00:26:33.062108 | orchestrator | osism.commons.operator : Create 
operator group -------------------------- 0.71s 2026-02-27 00:26:33.062122 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s 2026-02-27 00:26:33.062135 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s 2026-02-27 00:26:33.062147 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.30s 2026-02-27 00:26:33.062194 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.23s 2026-02-27 00:26:33.062208 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.22s 2026-02-27 00:26:33.062221 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.20s 2026-02-27 00:26:33.062232 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2026-02-27 00:26:33.062243 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.20s 2026-02-27 00:26:33.062254 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-02-27 00:26:33.062264 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.20s 2026-02-27 00:26:33.062275 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-02-27 00:26:33.451698 | orchestrator | + osism apply --environment custom facts 2026-02-27 00:26:35.584568 | orchestrator | 2026-02-27 00:26:35 | INFO  | Trying to run play facts in environment custom 2026-02-27 00:26:45.668247 | orchestrator | 2026-02-27 00:26:45 | INFO  | Task c14d987b-0064-497a-8d79-baa73e84e200 (facts) was prepared for execution. 2026-02-27 00:26:45.668352 | orchestrator | 2026-02-27 00:26:45 | INFO  | It takes a moment until task c14d987b-0064-497a-8d79-baa73e84e200 (facts) has been started and output is visible here. 
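The `set-kolla-namespace.sh` trace earlier in the log boils down to a version-gated `sed` substitution on `kolla.yml`. A minimal sketch of that logic, assuming a `semver` helper that prints -1/0/1 (the real helper's implementation is not shown in the log; this stand-in uses `sort -V`, and the temp file replaces the real `/opt/configuration/inventory/group_vars/all/kolla.yml`):

```shell
#!/usr/bin/env bash
set -e

# Hypothetical stand-in for the `semver` helper seen in the trace:
# prints -1, 0, or 1 depending on how $1 compares to $2.
semver() {
  if [ "$1" = "$2" ]; then echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
  else echo 1
  fi
}

# Stand-in for /opt/configuration/inventory/group_vars/all/kolla.yml
KOLLA_YML=$(mktemp)
echo 'docker_namespace: kolla/dev' > "$KOLLA_YML"

# Per the trace, versions below 10.0.0 get the kolla/release namespace.
if [ "$(semver 9.5.0 10.0.0-0)" -lt 0 ]; then
  sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' "$KOLLA_YML"
fi

cat "$KOLLA_YML"   # docker_namespace: kolla/release
```

The double-bracket tests in the trace (`[[ -1 -ge 0 ]]`, `[[ 1 -lt 0 ]]`) are exactly these comparisons on the helper's numeric output.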
2026-02-27 00:27:30.986477 | orchestrator | 2026-02-27 00:27:30.986594 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-27 00:27:30.986611 | orchestrator | 2026-02-27 00:27:30.986623 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-27 00:27:30.986635 | orchestrator | Friday 27 February 2026 00:26:50 +0000 (0:00:00.091) 0:00:00.091 ******* 2026-02-27 00:27:30.986648 | orchestrator | ok: [testbed-manager] 2026-02-27 00:27:30.986661 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:27:30.986673 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:27:30.986683 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:27:30.986694 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:27:30.986705 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:27:30.986741 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:27:30.986753 | orchestrator | 2026-02-27 00:27:30.986764 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-27 00:27:30.986775 | orchestrator | Friday 27 February 2026 00:26:51 +0000 (0:00:01.497) 0:00:01.588 ******* 2026-02-27 00:27:30.986786 | orchestrator | ok: [testbed-manager] 2026-02-27 00:27:30.986797 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:27:30.986808 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:27:30.986819 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:27:30.986829 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:27:30.986840 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:27:30.986851 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:27:30.986862 | orchestrator | 2026-02-27 00:27:30.986872 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-27 00:27:30.986883 | orchestrator | 2026-02-27 00:27:30.986894 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-27 00:27:30.986905 | orchestrator | Friday 27 February 2026 00:26:52 +0000 (0:00:01.248) 0:00:02.837 ******* 2026-02-27 00:27:30.986916 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:27:30.986926 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:27:30.986937 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:27:30.986948 | orchestrator | 2026-02-27 00:27:30.986959 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-27 00:27:30.986971 | orchestrator | Friday 27 February 2026 00:26:52 +0000 (0:00:00.133) 0:00:02.970 ******* 2026-02-27 00:27:30.987015 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:27:30.987035 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:27:30.987051 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:27:30.987078 | orchestrator | 2026-02-27 00:27:30.987099 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-27 00:27:30.987117 | orchestrator | Friday 27 February 2026 00:26:53 +0000 (0:00:00.232) 0:00:03.202 ******* 2026-02-27 00:27:30.987135 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:27:30.987154 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:27:30.987172 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:27:30.987188 | orchestrator | 2026-02-27 00:27:30.987207 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-27 00:27:30.987227 | orchestrator | Friday 27 February 2026 00:26:53 +0000 (0:00:00.236) 0:00:03.439 ******* 2026-02-27 00:27:30.987247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 00:27:30.987267 | orchestrator | 2026-02-27 00:27:30.987285 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-27 00:27:30.987305 | orchestrator | Friday 27 February 2026 00:26:53 +0000 (0:00:00.163) 0:00:03.603 ******* 2026-02-27 00:27:30.987325 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:27:30.987343 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:27:30.987362 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:27:30.987379 | orchestrator | 2026-02-27 00:27:30.987398 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-27 00:27:30.987410 | orchestrator | Friday 27 February 2026 00:26:54 +0000 (0:00:00.443) 0:00:04.046 ******* 2026-02-27 00:27:30.987421 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:27:30.987432 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:27:30.987443 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:27:30.987454 | orchestrator | 2026-02-27 00:27:30.987465 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-27 00:27:30.987475 | orchestrator | Friday 27 February 2026 00:26:54 +0000 (0:00:00.150) 0:00:04.196 ******* 2026-02-27 00:27:30.987486 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:27:30.987497 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:27:30.987507 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:27:30.987518 | orchestrator | 2026-02-27 00:27:30.987529 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-27 00:27:30.987553 | orchestrator | Friday 27 February 2026 00:26:55 +0000 (0:00:01.146) 0:00:05.342 ******* 2026-02-27 00:27:30.987564 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:27:30.987574 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:27:30.987585 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:27:30.987596 | orchestrator | 2026-02-27 00:27:30.987607 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-27 
00:27:30.987665 | orchestrator | Friday 27 February 2026 00:26:55 +0000 (0:00:00.466) 0:00:05.808 ******* 2026-02-27 00:27:30.987678 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:27:30.987689 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:27:30.987700 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:27:30.987710 | orchestrator | 2026-02-27 00:27:30.987721 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-27 00:27:30.987732 | orchestrator | Friday 27 February 2026 00:26:56 +0000 (0:00:01.065) 0:00:06.874 ******* 2026-02-27 00:27:30.987743 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:27:30.987754 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:27:30.987764 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:27:30.987775 | orchestrator | 2026-02-27 00:27:30.987786 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-27 00:27:30.987797 | orchestrator | Friday 27 February 2026 00:27:13 +0000 (0:00:16.414) 0:00:23.288 ******* 2026-02-27 00:27:30.987807 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:27:30.987818 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:27:30.987829 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:27:30.987840 | orchestrator | 2026-02-27 00:27:30.987851 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-27 00:27:30.987884 | orchestrator | Friday 27 February 2026 00:27:13 +0000 (0:00:00.123) 0:00:23.412 ******* 2026-02-27 00:27:30.987896 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:27:30.987907 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:27:30.987918 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:27:30.987929 | orchestrator | 2026-02-27 00:27:30.987939 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-27 
00:27:30.987955 | orchestrator | Friday 27 February 2026 00:27:21 +0000 (0:00:08.310) 0:00:31.723 *******
2026-02-27 00:27:30.987966 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:27:30.987977 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:27:30.988103 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:27:30.988123 | orchestrator |
2026-02-27 00:27:30.988141 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-27 00:27:30.988159 | orchestrator | Friday 27 February 2026 00:27:22 +0000 (0:00:00.502) 0:00:32.226 *******
2026-02-27 00:27:30.988177 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-27 00:27:30.988196 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-27 00:27:30.988214 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-27 00:27:30.988234 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-27 00:27:30.988251 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-27 00:27:30.988270 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-27 00:27:30.988298 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-27 00:27:30.988317 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-27 00:27:30.988334 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-27 00:27:30.988352 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-27 00:27:30.988369 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-27 00:27:30.988386 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-27 00:27:30.988403 | orchestrator |
2026-02-27 00:27:30.988420 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-27 00:27:30.988457 | orchestrator | Friday 27 February 2026 00:27:25 +0000 (0:00:03.592) 0:00:35.818 *******
2026-02-27 00:27:30.988475 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:27:30.988494 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:27:30.988512 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:27:30.988531 | orchestrator |
2026-02-27 00:27:30.988549 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-27 00:27:30.988568 | orchestrator |
2026-02-27 00:27:30.988587 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-27 00:27:30.988605 | orchestrator | Friday 27 February 2026 00:27:27 +0000 (0:00:01.435) 0:00:37.254 *******
2026-02-27 00:27:30.988623 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:27:30.988635 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:27:30.988646 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:27:30.988657 | orchestrator | ok: [testbed-manager]
2026-02-27 00:27:30.988667 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:27:30.988678 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:27:30.988689 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:27:30.988699 | orchestrator |
2026-02-27 00:27:30.988710 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:27:30.988722 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:27:30.988733 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:27:30.988745 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:27:30.988756 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:27:30.988767 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-27 00:27:30.988778 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-27 00:27:30.988789 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-27 00:27:30.988799 | orchestrator |
2026-02-27 00:27:30.988810 | orchestrator |
2026-02-27 00:27:30.988821 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:27:30.988832 | orchestrator | Friday 27 February 2026 00:27:30 +0000 (0:00:03.686) 0:00:40.940 *******
2026-02-27 00:27:30.988842 | orchestrator | ===============================================================================
2026-02-27 00:27:30.988853 | orchestrator | osism.commons.repository : Update package cache ------------------------ 16.41s
2026-02-27 00:27:30.988864 | orchestrator | Install required packages (Debian) -------------------------------------- 8.31s
2026-02-27 00:27:30.988874 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.69s
2026-02-27 00:27:30.988885 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s
2026-02-27 00:27:30.988896 | orchestrator | Create custom facts directory ------------------------------------------- 1.50s
2026-02-27 00:27:30.988906 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.44s
2026-02-27 00:27:30.988928 | orchestrator | Copy fact file ---------------------------------------------------------- 1.25s
2026-02-27 00:27:31.241807 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.15s
2026-02-27 00:27:31.241907 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2026-02-27 00:27:31.241941 | orchestrator | Create custom facts directory ------------------------------------------- 0.50s
2026-02-27 00:27:31.241975 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2026-02-27 00:27:31.242079 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2026-02-27 00:27:31.242090 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-02-27 00:27:31.242101 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.23s
2026-02-27 00:27:31.242112 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.16s
2026-02-27 00:27:31.242124 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s
2026-02-27 00:27:31.242135 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2026-02-27 00:27:31.242146 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-02-27 00:27:31.576125 | orchestrator | + osism apply bootstrap
2026-02-27 00:27:43.714542 | orchestrator | 2026-02-27 00:27:43 | INFO  | Task 92d1fefe-2df4-482e-9a21-30411a5175bf (bootstrap) was prepared for execution.
2026-02-27 00:27:43.714660 | orchestrator | 2026-02-27 00:27:43 | INFO  | It takes a moment until task 92d1fefe-2df4-482e-9a21-30411a5175bf (bootstrap) has been started and output is visible here.
2026-02-27 00:28:01.201853 | orchestrator |
2026-02-27 00:28:01.202094 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-27 00:28:01.202118 | orchestrator |
2026-02-27 00:28:01.202131 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-27 00:28:01.202143 | orchestrator | Friday 27 February 2026 00:27:47 +0000 (0:00:00.175) 0:00:00.175 *******
2026-02-27 00:28:01.202155 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:01.202170 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:01.202181 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:01.202193 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:01.202204 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:01.202215 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:01.202226 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:01.202237 | orchestrator |
2026-02-27 00:28:01.202249 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-27 00:28:01.202260 | orchestrator |
2026-02-27 00:28:01.202271 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-27 00:28:01.202282 | orchestrator | Friday 27 February 2026 00:27:48 +0000 (0:00:00.305) 0:00:00.481 *******
2026-02-27 00:28:01.202293 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:01.202304 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:01.202315 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:01.202326 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:01.202336 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:01.202347 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:01.202358 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:01.202369 | orchestrator |
2026-02-27 00:28:01.202380 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-27 00:28:01.202393 | orchestrator |
2026-02-27 00:28:01.202407 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-27 00:28:01.202419 | orchestrator | Friday 27 February 2026 00:27:51 +0000 (0:00:03.738) 0:00:04.220 *******
2026-02-27 00:28:01.202433 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-27 00:28:01.202446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-27 00:28:01.202459 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-27 00:28:01.202471 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 00:28:01.202484 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-27 00:28:01.202497 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-27 00:28:01.202510 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-27 00:28:01.202523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 00:28:01.202536 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-27 00:28:01.202571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-27 00:28:01.202583 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-27 00:28:01.202594 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-27 00:28:01.202604 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-27 00:28:01.202615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 00:28:01.202626 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-27 00:28:01.202640 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-27 00:28:01.202659 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-27 00:28:01.202675 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-27 00:28:01.202694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-27 00:28:01.202713 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-27 00:28:01.202725 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-27 00:28:01.202736 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:28:01.202747 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-27 00:28:01.202758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-27 00:28:01.202768 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-27 00:28:01.202779 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:01.202790 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-27 00:28:01.202800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-27 00:28:01.202811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-27 00:28:01.202822 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-27 00:28:01.202832 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-27 00:28:01.202843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-27 00:28:01.202853 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-27 00:28:01.202864 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:28:01.202875 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-27 00:28:01.202886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-27 00:28:01.202897 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:28:01.202907 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-27 00:28:01.202918 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-27 00:28:01.202929 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-27 00:28:01.202939 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-27 00:28:01.202950 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-27 00:28:01.202960 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 00:28:01.202994 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-27 00:28:01.203014 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-27 00:28:01.203031 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-27 00:28:01.203060 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-27 00:28:01.203072 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-27 00:28:01.203083 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-27 00:28:01.203093 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:28:01.203123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-27 00:28:01.203135 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-27 00:28:01.203146 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-27 00:28:01.203156 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:28:01.203178 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-27 00:28:01.203189 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:28:01.203200 | orchestrator |
2026-02-27 00:28:01.203211 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-27 00:28:01.203222 | orchestrator |
2026-02-27 00:28:01.203233 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-27 00:28:01.203243 | orchestrator | Friday 27 February 2026 00:27:52 +0000 (0:00:00.504) 0:00:04.724 *******
2026-02-27 00:28:01.203254 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:01.203265 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:01.203276 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:01.203286 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:01.203297 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:01.203308 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:01.203319 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:01.203330 | orchestrator |
2026-02-27 00:28:01.203341 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-27 00:28:01.203351 | orchestrator | Friday 27 February 2026 00:27:54 +0000 (0:00:02.220) 0:00:06.945 *******
2026-02-27 00:28:01.203362 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:01.203373 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:01.203384 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:01.203394 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:01.203405 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:01.203416 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:01.203427 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:01.203437 | orchestrator |
2026-02-27 00:28:01.203448 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-27 00:28:01.203459 | orchestrator | Friday 27 February 2026 00:27:56 +0000 (0:00:00.329) 0:00:08.326 *******
2026-02-27 00:28:01.203471 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:28:01.203484 | orchestrator |
2026-02-27 00:28:01.203495 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-27 00:28:01.203506 | orchestrator | Friday 27 February 2026 00:27:56 +0000 (0:00:00.329) 0:00:08.656 *******
2026-02-27 00:28:01.203516 | orchestrator | changed: [testbed-manager]
2026-02-27 00:28:01.203527 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:28:01.203538 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:28:01.203549 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:28:01.203560 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:01.203570 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:01.203581 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:01.203592 | orchestrator |
2026-02-27 00:28:01.203603 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-27 00:28:01.203614 | orchestrator | Friday 27 February 2026 00:27:58 +0000 (0:00:02.153) 0:00:10.809 *******
2026-02-27 00:28:01.203625 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:01.203637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:28:01.203650 | orchestrator |
2026-02-27 00:28:01.203661 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-27 00:28:01.203672 | orchestrator | Friday 27 February 2026 00:27:58 +0000 (0:00:00.319) 0:00:11.128 *******
2026-02-27 00:28:01.203683 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:28:01.203694 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:28:01.203704 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:28:01.203715 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:01.203726 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:01.203737 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:01.203754 | orchestrator |
2026-02-27 00:28:01.203770 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-27 00:28:01.203781 | orchestrator | Friday 27 February 2026 00:27:59 +0000 (0:00:01.053) 0:00:12.182 *******
2026-02-27 00:28:01.203792 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:01.203803 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:01.203814 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:28:01.203824 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:28:01.203835 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:01.203845 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:01.203856 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:28:01.203866 | orchestrator |
2026-02-27 00:28:01.203877 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-27 00:28:01.203888 | orchestrator | Friday 27 February 2026 00:28:00 +0000 (0:00:00.657) 0:00:12.839 *******
2026-02-27 00:28:01.203899 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:28:01.203910 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:28:01.203920 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:28:01.203931 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:28:01.203941 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:28:01.203952 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:28:01.203963 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:01.204000 | orchestrator |
2026-02-27 00:28:01.204013 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-27 00:28:01.204026 | orchestrator | Friday 27 February 2026 00:28:01 +0000 (0:00:00.451) 0:00:13.291 *******
2026-02-27 00:28:01.204037 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:01.204048 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:28:01.204066 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:28:14.363714 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:28:14.363833 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:28:14.363852 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:28:14.363865 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:28:14.363878 | orchestrator |
2026-02-27 00:28:14.363892 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-27 00:28:14.363905 | orchestrator | Friday 27 February 2026 00:28:01 +0000 (0:00:00.242) 0:00:13.534 *******
2026-02-27 00:28:14.363918 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:28:14.363949 | orchestrator |
2026-02-27 00:28:14.363964 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-27 00:28:14.364118 | orchestrator | Friday 27 February 2026 00:28:01 +0000 (0:00:00.332) 0:00:13.866 *******
2026-02-27 00:28:14.364131 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:28:14.364139 | orchestrator |
2026-02-27 00:28:14.364147 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-27 00:28:14.364155 | orchestrator | Friday 27 February 2026 00:28:01 +0000 (0:00:00.317) 0:00:14.183 *******
2026-02-27 00:28:14.364169 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.364182 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.364195 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.364207 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.364220 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.364234 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.364247 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.364261 | orchestrator |
2026-02-27 00:28:14.364274 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-27 00:28:14.364287 | orchestrator | Friday 27 February 2026 00:28:03 +0000 (0:00:01.509) 0:00:15.693 *******
2026-02-27 00:28:14.364330 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:14.364346 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:28:14.364359 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:28:14.364372 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:28:14.364385 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:28:14.364399 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:28:14.364412 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:28:14.364426 | orchestrator |
2026-02-27 00:28:14.364438 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-27 00:28:14.364451 | orchestrator | Friday 27 February 2026 00:28:03 +0000 (0:00:00.373) 0:00:16.066 *******
2026-02-27 00:28:14.364464 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.364477 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.364489 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.364502 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.364516 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.364528 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.364540 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.364551 | orchestrator |
2026-02-27 00:28:14.364563 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-27 00:28:14.364575 | orchestrator | Friday 27 February 2026 00:28:04 +0000 (0:00:00.586) 0:00:16.652 *******
2026-02-27 00:28:14.364588 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:14.364602 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:28:14.364616 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:28:14.364628 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:28:14.364641 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:28:14.364654 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:28:14.364668 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:28:14.364681 | orchestrator |
2026-02-27 00:28:14.364695 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-27 00:28:14.364709 | orchestrator | Friday 27 February 2026 00:28:04 +0000 (0:00:00.266) 0:00:16.919 *******
2026-02-27 00:28:14.364722 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.364733 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:28:14.364741 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:28:14.364748 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:28:14.364755 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:14.364763 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:14.364780 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:14.364788 | orchestrator |
2026-02-27 00:28:14.364795 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-27 00:28:14.364802 | orchestrator | Friday 27 February 2026 00:28:05 +0000 (0:00:00.640) 0:00:17.560 *******
2026-02-27 00:28:14.364809 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.364817 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:28:14.364824 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:28:14.364850 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:28:14.364858 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:14.364865 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:14.364872 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:14.364879 | orchestrator |
2026-02-27 00:28:14.364886 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-27 00:28:14.364893 | orchestrator | Friday 27 February 2026 00:28:06 +0000 (0:00:01.130) 0:00:18.690 *******
2026-02-27 00:28:14.364901 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.364908 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.364915 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.364922 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.364930 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.364937 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.364944 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.364951 | orchestrator |
2026-02-27 00:28:14.364958 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-27 00:28:14.365000 | orchestrator | Friday 27 February 2026 00:28:07 +0000 (0:00:01.150) 0:00:19.840 *******
2026-02-27 00:28:14.365030 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:28:14.365039 | orchestrator |
2026-02-27 00:28:14.365046 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-27 00:28:14.365058 | orchestrator | Friday 27 February 2026 00:28:07 +0000 (0:00:00.333) 0:00:20.174 *******
2026-02-27 00:28:14.365070 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:14.365081 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:14.365092 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:28:14.365104 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:28:14.365116 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:14.365128 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:14.365139 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:28:14.365151 | orchestrator |
2026-02-27 00:28:14.365162 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-27 00:28:14.365174 | orchestrator | Friday 27 February 2026 00:28:09 +0000 (0:00:01.309) 0:00:21.484 *******
2026-02-27 00:28:14.365187 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.365199 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.365211 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.365219 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.365226 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.365233 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.365240 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.365247 | orchestrator |
2026-02-27 00:28:14.365255 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-27 00:28:14.365262 | orchestrator | Friday 27 February 2026 00:28:09 +0000 (0:00:00.260) 0:00:21.744 *******
2026-02-27 00:28:14.365269 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.365276 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.365283 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.365290 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.365298 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.365305 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.365312 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.365319 | orchestrator |
2026-02-27 00:28:14.365326 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-27 00:28:14.365333 | orchestrator | Friday 27 February 2026 00:28:09 +0000 (0:00:00.283) 0:00:22.028 *******
2026-02-27 00:28:14.365340 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.365347 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.365354 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.365361 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.365368 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.365376 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.365383 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.365390 | orchestrator |
2026-02-27 00:28:14.365397 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-27 00:28:14.365404 | orchestrator | Friday 27 February 2026 00:28:10 +0000 (0:00:00.255) 0:00:22.283 *******
2026-02-27 00:28:14.365412 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:28:14.365422 | orchestrator |
2026-02-27 00:28:14.365429 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-27 00:28:14.365436 | orchestrator | Friday 27 February 2026 00:28:10 +0000 (0:00:00.340) 0:00:22.624 *******
2026-02-27 00:28:14.365447 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.365459 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.365488 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.365503 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.365514 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.365525 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.365536 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.365547 | orchestrator |
2026-02-27 00:28:14.365558 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-27 00:28:14.365571 | orchestrator | Friday 27 February 2026 00:28:10 +0000 (0:00:00.585) 0:00:23.209 *******
2026-02-27 00:28:14.365582 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:28:14.365594 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:28:14.365606 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:28:14.365619 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:28:14.365630 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:28:14.365642 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:28:14.365649 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:28:14.365657 | orchestrator |
2026-02-27 00:28:14.365664 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-27 00:28:14.365672 | orchestrator | Friday 27 February 2026 00:28:11 +0000 (0:00:00.269) 0:00:23.479 *******
2026-02-27 00:28:14.365679 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.365686 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.365693 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.365700 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.365707 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:14.365715 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:14.365722 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:14.365729 | orchestrator |
2026-02-27 00:28:14.365736 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-27 00:28:14.365743 | orchestrator | Friday 27 February 2026 00:28:12 +0000 (0:00:01.172) 0:00:24.652 *******
2026-02-27 00:28:14.365750 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.365757 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:14.365764 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.365772 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:14.365779 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.365786 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.365808 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:14.365825 | orchestrator |
2026-02-27 00:28:14.365837 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-27 00:28:14.365849 | orchestrator | Friday 27 February 2026 00:28:13 +0000 (0:00:00.634) 0:00:25.287 *******
2026-02-27 00:28:14.365862 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:14.365874 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:14.365886 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:14.365897 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:14.365919 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:56.631300 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:56.631434 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:56.631463 | orchestrator |
2026-02-27 00:28:56.631485 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-27 00:28:56.631505 | orchestrator | Friday 27 February 2026 00:28:14 +0000 (0:00:01.304) 0:00:26.591 *******
2026-02-27 00:28:56.631525 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:56.631544 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:56.631562 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:56.631574 | orchestrator | changed: [testbed-manager]
2026-02-27 00:28:56.631585 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:28:56.631597 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:28:56.631608 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:28:56.631619 | orchestrator |
2026-02-27 00:28:56.631630 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-27 00:28:56.631641 | orchestrator | Friday 27 February 2026 00:28:30 +0000 (0:00:15.737) 0:00:42.328 *******
2026-02-27 00:28:56.631652 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:56.631684 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:56.631695 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:56.631706 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:56.631717 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:56.631727 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:56.631738 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:56.631748 | orchestrator |
2026-02-27 00:28:56.631759 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-27 00:28:56.631770 | orchestrator | Friday 27 February 2026 00:28:30 +0000 (0:00:00.282) 0:00:42.611 *******
2026-02-27 00:28:56.631781 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:56.631791 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:56.631802 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:56.631812 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:56.631823 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:56.631834 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:56.631844 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:56.631855 | orchestrator |
2026-02-27 00:28:56.631865 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-27 00:28:56.631876 | orchestrator | Friday 27 February 2026 00:28:30 +0000 (0:00:00.252) 0:00:42.864 *******
2026-02-27 00:28:56.631887 | orchestrator | ok: [testbed-manager]
2026-02-27 00:28:56.631897 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:28:56.631908 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:28:56.631919 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:28:56.631929 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:28:56.631940 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:28:56.631951 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:28:56.631962 | orchestrator |
2026-02-27 00:28:56.631972 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-27 00:28:56.631983 | orchestrator | Friday 27 February 2026 00:28:30 +0000 (0:00:00.257) 0:00:43.121 *******
2026-02-27 
00:28:56.632019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:28:56.632034 | orchestrator | 2026-02-27 00:28:56.632045 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-27 00:28:56.632056 | orchestrator | Friday 27 February 2026 00:28:31 +0000 (0:00:00.345) 0:00:43.467 ******* 2026-02-27 00:28:56.632066 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:28:56.632077 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:28:56.632094 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:28:56.632113 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:28:56.632131 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:28:56.632152 | orchestrator | ok: [testbed-manager] 2026-02-27 00:28:56.632172 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:28:56.632190 | orchestrator | 2026-02-27 00:28:56.632210 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-27 00:28:56.632229 | orchestrator | Friday 27 February 2026 00:28:32 +0000 (0:00:01.758) 0:00:45.226 ******* 2026-02-27 00:28:56.632248 | orchestrator | changed: [testbed-manager] 2026-02-27 00:28:56.632264 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:28:56.632275 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:28:56.632286 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:28:56.632297 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:28:56.632308 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:28:56.632318 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:28:56.632329 | orchestrator | 2026-02-27 00:28:56.632340 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-27 00:28:56.632361 | 
orchestrator | Friday 27 February 2026 00:28:34 +0000 (0:00:01.110) 0:00:46.336 ******* 2026-02-27 00:28:56.632372 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:28:56.632383 | orchestrator | ok: [testbed-manager] 2026-02-27 00:28:56.632393 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:28:56.632412 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:28:56.632423 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:28:56.632434 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:28:56.632444 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:28:56.632455 | orchestrator | 2026-02-27 00:28:56.632466 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-27 00:28:56.632477 | orchestrator | Friday 27 February 2026 00:28:34 +0000 (0:00:00.817) 0:00:47.154 ******* 2026-02-27 00:28:56.632488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:28:56.632501 | orchestrator | 2026-02-27 00:28:56.632513 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-27 00:28:56.632533 | orchestrator | Friday 27 February 2026 00:28:35 +0000 (0:00:00.365) 0:00:47.520 ******* 2026-02-27 00:28:56.632546 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:28:56.632557 | orchestrator | changed: [testbed-manager] 2026-02-27 00:28:56.632568 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:28:56.632579 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:28:56.632590 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:28:56.632600 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:28:56.632611 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:28:56.632622 | orchestrator | 2026-02-27 00:28:56.632666 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-27 00:28:56.632690 | orchestrator | Friday 27 February 2026 00:28:36 +0000 (0:00:01.062) 0:00:48.582 ******* 2026-02-27 00:28:56.632702 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:28:56.632712 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:28:56.632723 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:28:56.632734 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:28:56.632744 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:28:56.632755 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:28:56.632766 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:28:56.632776 | orchestrator | 2026-02-27 00:28:56.632787 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-27 00:28:56.632798 | orchestrator | Friday 27 February 2026 00:28:36 +0000 (0:00:00.249) 0:00:48.832 ******* 2026-02-27 00:28:56.632809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:28:56.632821 | orchestrator | 2026-02-27 00:28:56.632831 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-27 00:28:56.632842 | orchestrator | Friday 27 February 2026 00:28:36 +0000 (0:00:00.382) 0:00:49.215 ******* 2026-02-27 00:28:56.632853 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:28:56.632871 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:28:56.632885 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:28:56.632903 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:28:56.632914 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:28:56.632924 | orchestrator | ok: [testbed-manager] 2026-02-27 00:28:56.632948 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:28:56.632960 | 
orchestrator | 2026-02-27 00:28:56.632970 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-27 00:28:56.633007 | orchestrator | Friday 27 February 2026 00:28:38 +0000 (0:00:01.688) 0:00:50.903 ******* 2026-02-27 00:28:56.633018 | orchestrator | changed: [testbed-manager] 2026-02-27 00:28:56.633030 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:28:56.633041 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:28:56.633051 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:28:56.633062 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:28:56.633073 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:28:56.633083 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:28:56.633101 | orchestrator | 2026-02-27 00:28:56.633113 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-27 00:28:56.633123 | orchestrator | Friday 27 February 2026 00:28:39 +0000 (0:00:01.150) 0:00:52.053 ******* 2026-02-27 00:28:56.633135 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:28:56.633145 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:28:56.633156 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:28:56.633167 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:28:56.633178 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:28:56.633189 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:28:56.633199 | orchestrator | changed: [testbed-manager] 2026-02-27 00:28:56.633210 | orchestrator | 2026-02-27 00:28:56.633222 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-27 00:28:56.633233 | orchestrator | Friday 27 February 2026 00:28:52 +0000 (0:00:13.132) 0:01:05.186 ******* 2026-02-27 00:28:56.633243 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:28:56.633254 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:28:56.633265 | orchestrator | ok: 
[testbed-node-5] 2026-02-27 00:28:56.633275 | orchestrator | ok: [testbed-manager] 2026-02-27 00:28:56.633286 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:28:56.633297 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:28:56.633307 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:28:56.633318 | orchestrator | 2026-02-27 00:28:56.633329 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-27 00:28:56.633340 | orchestrator | Friday 27 February 2026 00:28:54 +0000 (0:00:01.766) 0:01:06.952 ******* 2026-02-27 00:28:56.633350 | orchestrator | ok: [testbed-manager] 2026-02-27 00:28:56.633361 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:28:56.633372 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:28:56.633383 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:28:56.633393 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:28:56.633404 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:28:56.633415 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:28:56.633425 | orchestrator | 2026-02-27 00:28:56.633436 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-27 00:28:56.633447 | orchestrator | Friday 27 February 2026 00:28:55 +0000 (0:00:01.016) 0:01:07.969 ******* 2026-02-27 00:28:56.633463 | orchestrator | ok: [testbed-manager] 2026-02-27 00:28:56.633502 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:28:56.633513 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:28:56.633524 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:28:56.633534 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:28:56.633545 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:28:56.633556 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:28:56.633567 | orchestrator | 2026-02-27 00:28:56.633577 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-27 00:28:56.633588 | orchestrator | Friday 
27 February 2026 00:28:55 +0000 (0:00:00.263) 0:01:08.232 ******* 2026-02-27 00:28:56.633599 | orchestrator | ok: [testbed-manager] 2026-02-27 00:28:56.633610 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:28:56.633620 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:28:56.633631 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:28:56.633641 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:28:56.633652 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:28:56.633663 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:28:56.633673 | orchestrator | 2026-02-27 00:28:56.633684 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-27 00:28:56.633695 | orchestrator | Friday 27 February 2026 00:28:56 +0000 (0:00:00.273) 0:01:08.505 ******* 2026-02-27 00:28:56.633707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:28:56.633718 | orchestrator | 2026-02-27 00:28:56.633737 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-27 00:31:49.287364 | orchestrator | Friday 27 February 2026 00:28:56 +0000 (0:00:00.357) 0:01:08.863 ******* 2026-02-27 00:31:49.287491 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:31:49.287517 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:31:49.287536 | orchestrator | ok: [testbed-manager] 2026-02-27 00:31:49.287552 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:31:49.287572 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:31:49.287590 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:31:49.287608 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:31:49.287628 | orchestrator | 2026-02-27 00:31:49.287647 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-27 00:31:49.287659 | orchestrator | Friday 27 February 2026 00:28:58 +0000 (0:00:01.635) 0:01:10.499 ******* 2026-02-27 00:31:49.287670 | orchestrator | changed: [testbed-manager] 2026-02-27 00:31:49.287683 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:31:49.287694 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:31:49.287705 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:31:49.287716 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:31:49.287727 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:31:49.287738 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:31:49.287748 | orchestrator | 2026-02-27 00:31:49.287759 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-27 00:31:49.287771 | orchestrator | Friday 27 February 2026 00:28:58 +0000 (0:00:00.565) 0:01:11.065 ******* 2026-02-27 00:31:49.287782 | orchestrator | ok: [testbed-manager] 2026-02-27 00:31:49.287793 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:31:49.287804 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:31:49.287815 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:31:49.287826 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:31:49.287836 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:31:49.287847 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:31:49.287858 | orchestrator | 2026-02-27 00:31:49.287870 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-27 00:31:49.287881 | orchestrator | Friday 27 February 2026 00:28:59 +0000 (0:00:00.253) 0:01:11.318 ******* 2026-02-27 00:31:49.287892 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:31:49.287903 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:31:49.287915 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:31:49.287928 | orchestrator | ok: [testbed-manager] 2026-02-27 00:31:49.287941 | orchestrator | ok: [testbed-node-4] 
2026-02-27 00:31:49.287954 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:31:49.287966 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:31:49.287978 | orchestrator | 2026-02-27 00:31:49.287992 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-02-27 00:31:49.288005 | orchestrator | Friday 27 February 2026 00:29:00 +0000 (0:00:01.073) 0:01:12.392 ******* 2026-02-27 00:31:49.288017 | orchestrator | changed: [testbed-manager] 2026-02-27 00:31:49.288078 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:31:49.288092 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:31:49.288104 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:31:49.288117 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:31:49.288129 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:31:49.288142 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:31:49.288154 | orchestrator | 2026-02-27 00:31:49.288171 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-02-27 00:31:49.288184 | orchestrator | Friday 27 February 2026 00:29:01 +0000 (0:00:01.610) 0:01:14.002 ******* 2026-02-27 00:31:49.288196 | orchestrator | ok: [testbed-manager] 2026-02-27 00:31:49.288209 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:31:49.288221 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:31:49.288234 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:31:49.288246 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:31:49.288258 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:31:49.288270 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:31:49.288281 | orchestrator | 2026-02-27 00:31:49.288309 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-02-27 00:31:49.288369 | orchestrator | Friday 27 February 2026 00:29:04 +0000 (0:00:02.389) 0:01:16.392 ******* 2026-02-27 00:31:49.288381 | orchestrator | ok: 
[testbed-manager] 2026-02-27 00:31:49.288393 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:31:49.288404 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:31:49.288415 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:31:49.288426 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:31:49.288437 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:31:49.288448 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:31:49.288458 | orchestrator | 2026-02-27 00:31:49.288470 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-02-27 00:31:49.288481 | orchestrator | Friday 27 February 2026 00:30:06 +0000 (0:01:02.810) 0:02:19.203 ******* 2026-02-27 00:31:49.288491 | orchestrator | changed: [testbed-manager] 2026-02-27 00:31:49.288503 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:31:49.288513 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:31:49.288524 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:31:49.288535 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:31:49.288546 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:31:49.288557 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:31:49.288568 | orchestrator | 2026-02-27 00:31:49.288579 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-02-27 00:31:49.288590 | orchestrator | Friday 27 February 2026 00:31:32 +0000 (0:01:25.572) 0:03:44.775 ******* 2026-02-27 00:31:49.288601 | orchestrator | ok: [testbed-manager] 2026-02-27 00:31:49.288612 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:31:49.288631 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:31:49.288650 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:31:49.288669 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:31:49.288688 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:31:49.288707 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:31:49.288725 | orchestrator | 2026-02-27 00:31:49.288744 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-02-27 00:31:49.288761 | orchestrator | Friday 27 February 2026 00:31:34 +0000 (0:00:01.929) 0:03:46.704 ******* 2026-02-27 00:31:49.288780 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:31:49.288799 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:31:49.288817 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:31:49.288834 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:31:49.288851 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:31:49.288868 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:31:49.288886 | orchestrator | changed: [testbed-manager] 2026-02-27 00:31:49.288903 | orchestrator | 2026-02-27 00:31:49.288920 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-02-27 00:31:49.288938 | orchestrator | Friday 27 February 2026 00:31:47 +0000 (0:00:13.509) 0:04:00.214 ******* 2026-02-27 00:31:49.289005 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-02-27 00:31:49.289083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-02-27 00:31:49.289127 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-02-27 00:31:49.289150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-27 00:31:49.289170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-27 00:31:49.289190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-02-27 00:31:49.289208 | orchestrator | 2026-02-27 00:31:49.289226 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-02-27 00:31:49.289246 | orchestrator | Friday 27 February 2026 00:31:48 +0000 (0:00:00.422) 0:04:00.636 ******* 2026-02-27 00:31:49.289266 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-02-27 00:31:49.289285 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:31:49.289305 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-27 00:31:49.289324 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-27 00:31:49.289345 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:31:49.289373 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-27 00:31:49.289395 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:31:49.289415 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:31:49.289436 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-27 00:31:49.289456 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-27 00:31:49.289475 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-27 00:31:49.289495 | orchestrator | 2026-02-27 00:31:49.289515 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-02-27 00:31:49.289535 | orchestrator | Friday 27 February 2026 00:31:49 +0000 (0:00:00.801) 0:04:01.437 ******* 2026-02-27 00:31:49.289555 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-27 00:31:49.289576 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-27 00:31:49.289597 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-27 00:31:49.289617 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-27 00:31:49.289636 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-02-27 00:31:49.289668 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-27 00:31:57.159667 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-27 00:31:57.159780 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-27 00:31:57.159815 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-27 00:31:57.159828 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-27 00:31:57.159838 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-27 00:31:57.159847 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-27 00:31:57.159857 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-27 00:31:57.159867 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-27 00:31:57.159877 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-27 00:31:57.159886 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-27 00:31:57.159896 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-27 00:31:57.159907 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-27 00:31:57.159916 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-27 00:31:57.159926 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:31:57.159938 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-27 00:31:57.159948 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-27 00:31:57.159957 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-27 00:31:57.159967 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-27 00:31:57.159977 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-27 00:31:57.159986 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-27 00:31:57.159995 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-27 00:31:57.160005 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-27 00:31:57.160014 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:31:57.160046 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-27 00:31:57.160056 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-27 00:31:57.160065 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-27 00:31:57.160075 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-27 00:31:57.160084 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-27 00:31:57.160094 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-27 00:31:57.160103 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:31:57.160113 | orchestrator | 
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-27 00:31:57.160137 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-27 00:31:57.160147 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-27 00:31:57.160157 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-27 00:31:57.160166 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-27 00:31:57.160176 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-27 00:31:57.160194 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-27 00:31:57.160204 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:31:57.160213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-27 00:31:57.160223 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-27 00:31:57.160233 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-27 00:31:57.160242 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-27 00:31:57.160252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-27 00:31:57.160280 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-27 00:31:57.160291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-27 00:31:57.160300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-27 00:31:57.160310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-27 00:31:57.160319 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-27 00:31:57.160329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-27 00:31:57.160339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-27 00:31:57.160348 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-27 00:31:57.160358 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-27 00:31:57.160367 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-27 00:31:57.160377 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-27 00:31:57.160386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-27 00:31:57.160396 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-27 00:31:57.160503 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-27 00:31:57.160513 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-27 00:31:57.160523 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-27 00:31:57.160532 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-27 00:31:57.160542 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-27 00:31:57.160551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-27 00:31:57.160561 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-27 00:31:57.160570 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-27 00:31:57.160580 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-27 00:31:57.160589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-27 00:31:57.160599 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-27 00:31:57.160609 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-27 00:31:57.160626 | orchestrator |
2026-02-27 00:31:57.160638 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-27 00:31:57.160647 | orchestrator | Friday 27 February 2026 00:31:55 +0000 (0:00:05.853) 0:04:07.291 *******
2026-02-27 00:31:57.160657 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-27 00:31:57.160666 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-27 00:31:57.160676 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-27 00:31:57.160685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-27 00:31:57.160701 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-27 00:31:57.160711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-27 00:31:57.160720 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-27 00:31:57.160730 | orchestrator |
2026-02-27 00:31:57.160739 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-27 00:31:57.160748 | orchestrator | Friday 27 February 2026 00:31:55 +0000 (0:00:00.596) 0:04:07.887 *******
2026-02-27 00:31:57.160758 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:31:57.160769 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:31:57.160786 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:31:57.160802 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:31:57.160818 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:31:57.160834 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:31:57.160851 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:31:57.160868 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:31:57.160883 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:31:57.160894 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:31:57.160913 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.065932 | orchestrator |
2026-02-27 00:32:11.066200 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-27 00:32:11.066224 | orchestrator | Friday 27 February 2026 00:31:57 +0000 (0:00:01.496) 0:04:09.384 *******
2026-02-27 00:32:11.066236 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.066249 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.066261 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:32:11.066286 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:32:11.066298 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.066310 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.066322 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:32:11.066333 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:32:11.066345 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.066356 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.066367 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-27 00:32:11.066378 | orchestrator |
2026-02-27 00:32:11.066389 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-27 00:32:11.066427 | orchestrator | Friday 27 February 2026 00:31:57 +0000 (0:00:00.630) 0:04:10.014 *******
2026-02-27 00:32:11.066439 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-27 00:32:11.066453 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:32:11.066466 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-27 00:32:11.066479 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-27 00:32:11.066491 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:32:11.066503 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:32:11.066517 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-27 00:32:11.066530 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:32:11.066543 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-27 00:32:11.066556 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-27 00:32:11.066569 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-27 00:32:11.066582 | orchestrator |
2026-02-27 00:32:11.066595 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-27 00:32:11.066607 | orchestrator | Friday 27 February 2026 00:31:58 +0000 (0:00:00.613) 0:04:10.628 *******
2026-02-27 00:32:11.066620 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:32:11.066633 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:32:11.066646 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:32:11.066658 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:32:11.066671 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:32:11.066683 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:32:11.066696 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:32:11.066708 | orchestrator |
2026-02-27 00:32:11.066721 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-27 00:32:11.066734 | orchestrator | Friday 27 February 2026 00:31:58 +0000 (0:00:00.351) 0:04:10.979 *******
2026-02-27 00:32:11.066748 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:32:11.066761 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:32:11.066774 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:32:11.066787 | orchestrator | ok: [testbed-manager]
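(Editor's note: the sysctl tasks above apply per-group kernel parameters — TCP keepalive, socket buffer sizes, conntrack limits, swappiness — as `(item={'name': …, 'value': …})` pairs. The following is a minimal illustrative sketch, not part of the osism.commons.sysctl role: it only shows how each logged key maps to its `/proc/sys` path and to a `sysctl.conf`-style line.)

```python
# Illustrative sketch only: map a sysctl key from the log to its /proc/sys
# path and render a sysctl.conf-style assignment. The helper names here are
# hypothetical; the actual work is done by the osism.commons.sysctl role.

def proc_path(name: str) -> str:
    """A sysctl key maps to /proc/sys with dots replaced by slashes."""
    return "/proc/sys/" + name.replace(".", "/")

def conf_line(item: dict) -> str:
    """Render one (name, value) item as an /etc/sysctl.d style line."""
    return f"{item['name']} = {item['value']}"

# A few of the items seen in the log above.
items = [
    {"name": "net.ipv4.tcp_keepalive_time", "value": 6},
    {"name": "net.core.somaxconn", "value": 4096},
    {"name": "vm.swappiness", "value": 1},
]

for item in items:
    print(proc_path(item["name"]), "->", conf_line(item))
```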
2026-02-27 00:32:11.066797 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:32:11.066808 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:32:11.066819 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:32:11.066830 | orchestrator |
2026-02-27 00:32:11.066841 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-27 00:32:11.066852 | orchestrator | Friday 27 February 2026 00:32:04 +0000 (0:00:06.051) 0:04:17.031 *******
2026-02-27 00:32:11.066863 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-27 00:32:11.066874 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-27 00:32:11.066885 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:32:11.066896 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-27 00:32:11.066907 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:32:11.066918 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-27 00:32:11.066929 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:32:11.066939 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-27 00:32:11.066951 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:32:11.066962 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-27 00:32:11.066989 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:32:11.067001 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:32:11.067012 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-27 00:32:11.067057 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:32:11.067068 | orchestrator |
2026-02-27 00:32:11.067088 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-27 00:32:11.067099 | orchestrator | Friday 27 February 2026 00:32:05 +0000 (0:00:00.328) 0:04:17.359 *******
2026-02-27 00:32:11.067110 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-27 00:32:11.067121 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-27 00:32:11.067133 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-27 00:32:11.067161 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-27 00:32:11.067173 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-27 00:32:11.067184 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-27 00:32:11.067195 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-27 00:32:11.067206 | orchestrator |
2026-02-27 00:32:11.067308 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-27 00:32:11.067321 | orchestrator | Friday 27 February 2026 00:32:06 +0000 (0:00:01.205) 0:04:18.565 *******
2026-02-27 00:32:11.067334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:32:11.067348 | orchestrator |
2026-02-27 00:32:11.067359 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-27 00:32:11.067370 | orchestrator | Friday 27 February 2026 00:32:06 +0000 (0:00:00.463) 0:04:19.028 *******
2026-02-27 00:32:11.067381 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:32:11.067392 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:32:11.067403 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:32:11.067414 | orchestrator | ok: [testbed-manager]
2026-02-27 00:32:11.067425 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:32:11.067435 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:32:11.067446 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:32:11.067457 | orchestrator |
2026-02-27 00:32:11.067467 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-27 00:32:11.067478 | orchestrator | Friday 27 February 2026 00:32:08 +0000 (0:00:01.237) 0:04:20.266 *******
2026-02-27 00:32:11.067489 | orchestrator | ok: [testbed-manager]
2026-02-27 00:32:11.067500 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:32:11.067510 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:32:11.067521 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:32:11.067532 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:32:11.067542 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:32:11.067553 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:32:11.067564 | orchestrator |
2026-02-27 00:32:11.067575 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-27 00:32:11.067586 | orchestrator | Friday 27 February 2026 00:32:08 +0000 (0:00:00.625) 0:04:20.891 *******
2026-02-27 00:32:11.067597 | orchestrator | changed: [testbed-manager]
2026-02-27 00:32:11.067608 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:32:11.067619 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:32:11.067630 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:32:11.067640 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:32:11.067651 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:32:11.067662 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:32:11.067673 | orchestrator |
2026-02-27 00:32:11.067683 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-27 00:32:11.067694 | orchestrator | Friday 27 February 2026 00:32:09 +0000 (0:00:00.659) 0:04:21.551 *******
2026-02-27 00:32:11.067705 | orchestrator | ok: [testbed-manager]
2026-02-27 00:32:11.067716 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:32:11.067727 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:32:11.067737 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:32:11.067748 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:32:11.067759 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:32:11.067770 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:32:11.067780 | orchestrator |
2026-02-27 00:32:11.067791 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-02-27 00:32:11.067811 | orchestrator | Friday 27 February 2026 00:32:09 +0000 (0:00:00.611) 0:04:22.163 *******
2026-02-27 00:32:11.067833 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772150718.8407402, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:11.067849 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772150736.2083313, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:11.067861 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772150732.9065406, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:11.067896 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772150739.2994432, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147503 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772150740.608907, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147643 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772150744.1267717, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147671 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1772150733.5711927, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147721 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147753 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147765 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147777 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147818 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147831 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147843 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 00:32:16.147863 | orchestrator |
2026-02-27 00:32:16.147876 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-02-27 00:32:16.147889 | orchestrator | Friday 27 February 2026 00:32:11 +0000 (0:00:01.133) 0:04:23.296 *******
2026-02-27 00:32:16.147900 | orchestrator | changed: [testbed-manager]
2026-02-27 00:32:16.147914 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:32:16.147925 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:32:16.147936 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:32:16.147948 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:32:16.147959 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:32:16.147970 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:32:16.147981 | orchestrator |
2026-02-27 00:32:16.147992 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-02-27 00:32:16.148003 | orchestrator | Friday 27 February 2026 00:32:12 +0000 (0:00:01.160) 0:04:24.457 *******
2026-02-27 00:32:16.148016 | orchestrator | changed: [testbed-manager]
2026-02-27 00:32:16.148064 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:32:16.148076 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:32:16.148089 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:32:16.148101 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:32:16.148119 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:32:16.148138 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:32:16.148157 | orchestrator |
2026-02-27 00:32:16.148183 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-27 00:32:16.148202 | orchestrator | Friday 27 February 2026 00:32:13 +0000 (0:00:01.190) 0:04:25.648 *******
2026-02-27 00:32:16.148221 | orchestrator | changed: [testbed-manager]
2026-02-27 00:32:16.148240 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:32:16.148259 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:32:16.148278 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:32:16.148291 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:32:16.148303 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:32:16.148316 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:32:16.148328 | orchestrator |
2026-02-27 00:32:16.148341 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-27 00:32:16.148354 | orchestrator | Friday 27 February 2026 00:32:14 +0000 (0:00:01.258) 0:04:26.906 *******
2026-02-27 00:32:16.148365 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:32:16.148376 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:32:16.148386 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:32:16.148397 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:32:16.148407 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:32:16.148418 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:32:16.148428 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:32:16.148439 | orchestrator |
2026-02-27 00:32:16.148450 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-27 00:32:16.148460 | orchestrator | Friday 27 February 2026 00:32:14 +0000 (0:00:00.275) 0:04:27.182 *******
2026-02-27 00:32:16.148471 | orchestrator | ok: [testbed-manager]
2026-02-27 00:32:16.148483 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:32:16.148494 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:32:16.148504 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:32:16.148515 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:32:16.148526 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:32:16.148537 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:32:16.148547 | orchestrator |
2026-02-27 00:32:16.148558 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-27 00:32:16.148569 | orchestrator | Friday 27 February 2026 00:32:15 +0000 (0:00:00.796) 0:04:27.978 *******
2026-02-27 00:32:16.148582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:32:16.148603 | orchestrator |
2026-02-27 00:32:16.148615 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-27 00:32:16.148635 | orchestrator | Friday 27 February 2026 00:32:16 +0000 (0:00:00.403) 0:04:28.382 *******
2026-02-27 00:33:35.310915 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.311091 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:33:35.311113 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:33:35.311125 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:33:35.311136 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:33:35.311148 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:33:35.311159 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:33:35.311170 | orchestrator |
2026-02-27 00:33:35.311183 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
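(Editor's note: the `Remove pam_motd.so rule` task earlier in this play rewrote `/etc/pam.d/sshd` and `/etc/pam.d/login` on every node. In effect it drops the PAM rules that load `pam_motd.so`. The sketch below illustrates only that outcome with plain string filtering; it is not the role's actual implementation, and the sample PAM lines are illustrative.)

```python
# Illustrative sketch: strip any PAM rule that loads pam_motd.so from a
# pam.d file's content. This mirrors the *effect* of the
# "Remove pam_motd.so rule" task, not its actual mechanism.

def strip_pam_motd(content: str) -> str:
    kept = [line for line in content.splitlines() if "pam_motd.so" not in line]
    return "\n".join(kept) + "\n"

# Hypothetical sample content resembling /etc/pam.d/sshd on Debian/Ubuntu.
sample = (
    "session    optional     pam_motd.so  motd=/run/motd.dynamic\n"
    "session    optional     pam_motd.so noupdate\n"
    "session    required     pam_limits.so\n"
)

print(strip_pam_motd(sample))
```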
2026-02-27 00:33:35.311195 | orchestrator | Friday 27 February 2026 00:32:24 +0000 (0:00:08.333) 0:04:36.715 *******
2026-02-27 00:33:35.311206 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.311217 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:35.311228 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:35.311240 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:35.311251 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:35.311262 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:35.311273 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:35.311283 | orchestrator |
2026-02-27 00:33:35.311295 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-27 00:33:35.311305 | orchestrator | Friday 27 February 2026 00:32:25 +0000 (0:00:01.227) 0:04:37.943 *******
2026-02-27 00:33:35.311316 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:35.311327 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.311338 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:35.311349 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:35.311359 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:35.311370 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:35.311381 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:35.311392 | orchestrator |
2026-02-27 00:33:35.311403 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-27 00:33:35.311414 | orchestrator | Friday 27 February 2026 00:32:27 +0000 (0:00:02.097) 0:04:40.040 *******
2026-02-27 00:33:35.311425 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.311435 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:35.311446 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:35.311459 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:35.311473 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:35.311486 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:35.311498 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:35.311510 | orchestrator |
2026-02-27 00:33:35.311523 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-27 00:33:35.311550 | orchestrator | Friday 27 February 2026 00:32:28 +0000 (0:00:00.295) 0:04:40.336 *******
2026-02-27 00:33:35.311563 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.311576 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:35.311588 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:35.311600 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:35.311612 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:35.311625 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:35.311642 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:35.311661 | orchestrator |
2026-02-27 00:33:35.311677 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-27 00:33:35.311689 | orchestrator | Friday 27 February 2026 00:32:28 +0000 (0:00:00.299) 0:04:40.636 *******
2026-02-27 00:33:35.311703 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.311716 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:35.311729 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:35.311768 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:35.311782 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:35.311795 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:35.311807 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:35.311819 | orchestrator |
2026-02-27 00:33:35.311830 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-27 00:33:35.311841 | orchestrator | Friday 27 February 2026 00:32:28 +0000 (0:00:00.304) 0:04:40.940 *******
2026-02-27 00:33:35.311852 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.311863 | orchestrator | ok: [testbed-node-4]
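(Editor's note: each TASK header in this log is followed by a timing line of the form `Friday 27 February 2026 00:32:28 +0000 (0:00:00.295) 0:04:40.336 *******`, where the parenthesised value is the previous task's duration and the second value is the cumulative play runtime, as emitted by a profile_tasks-style callback. A small sketch for extracting those two durations from such a line; the function names are hypothetical:)

```python
import re

# Match "(previous task duration) cumulative runtime" in a
# profile_tasks-style timing line from this job log.
TIMING = re.compile(r"\((?P<delta>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d+)")

def to_seconds(hms: str) -> float:
    """Convert 'H:MM:SS.mmm' to seconds."""
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

def parse_timing(line: str):
    """Return (task_duration_s, cumulative_s) or None if no timing found."""
    match = TIMING.search(line)
    if not match:
        return None
    return to_seconds(match.group("delta")), to_seconds(match.group("total"))

line = "Friday 27 February 2026 00:32:28 +0000 (0:00:00.295) 0:04:40.336 *******"
print(parse_timing(line))
```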
2026-02-27 00:33:35.311874 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:35.311884 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:35.311895 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:35.311906 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:35.311917 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:35.311928 | orchestrator |
2026-02-27 00:33:35.311939 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-27 00:33:35.311950 | orchestrator | Friday 27 February 2026 00:32:34 +0000 (0:00:05.655) 0:04:46.595 *******
2026-02-27 00:33:35.311963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:33:35.311976 | orchestrator |
2026-02-27 00:33:35.311987 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-27 00:33:35.312019 | orchestrator | Friday 27 February 2026 00:32:34 +0000 (0:00:00.451) 0:04:47.047 *******
2026-02-27 00:33:35.312031 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-27 00:33:35.312042 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-27 00:33:35.312053 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-27 00:33:35.312064 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:35.312075 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-27 00:33:35.312103 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-27 00:33:35.312115 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-27 00:33:35.312125 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:35.312136 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-27 00:33:35.312147 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-27 00:33:35.312158 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:35.312169 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-27 00:33:35.312179 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:33:35.312190 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-27 00:33:35.312202 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-27 00:33:35.312213 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:33:35.312241 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-27 00:33:35.312253 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:33:35.312264 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-27 00:33:35.312275 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-27 00:33:35.312286 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:33:35.312297 | orchestrator |
2026-02-27 00:33:35.312307 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-27 00:33:35.312318 | orchestrator | Friday 27 February 2026 00:32:35 +0000 (0:00:00.371) 0:04:47.419 *******
2026-02-27 00:33:35.312330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:33:35.312341 | orchestrator |
2026-02-27 00:33:35.312352 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-27 00:33:35.312371 | orchestrator | Friday 27 February 2026 00:32:35 +0000 (0:00:00.416) 0:04:47.835 *******
2026-02-27 00:33:35.312382 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-27 00:33:35.312393 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-27 00:33:35.312404 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:35.312415 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-27 00:33:35.312426 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:35.312436 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:35.312447 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-27 00:33:35.312458 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-27 00:33:35.312469 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:33:35.312480 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-27 00:33:35.312491 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:33:35.312501 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:33:35.312512 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-27 00:33:35.312523 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:33:35.312534 | orchestrator |
2026-02-27 00:33:35.312544 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-27 00:33:35.312555 | orchestrator | Friday 27 February 2026 00:32:35 +0000 (0:00:00.363) 0:04:48.199 *******
2026-02-27 00:33:35.312566 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:33:35.312577 | orchestrator |
2026-02-27 00:33:35.312588 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-27 00:33:35.312599 | orchestrator | Friday 27 February 2026 00:32:36 +0000 (0:00:00.429) 0:04:48.628 *******
2026-02-27 00:33:35.312609 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:33:35.312620 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:33:35.312631 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:33:35.312642 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:33:35.312658 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:33:35.312669 | orchestrator | changed: [testbed-manager]
2026-02-27 00:33:35.312680 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:33:35.312691 | orchestrator |
2026-02-27 00:33:35.312701 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-27 00:33:35.312712 | orchestrator | Friday 27 February 2026 00:33:11 +0000 (0:00:35.424) 0:05:24.053 *******
2026-02-27 00:33:35.312723 | orchestrator | changed: [testbed-manager]
2026-02-27 00:33:35.312733 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:33:35.312744 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:33:35.312755 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:33:35.312766 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:33:35.312777 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:33:35.312787 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:33:35.312798 | orchestrator |
2026-02-27 00:33:35.312809 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-27 00:33:35.312820 | orchestrator | Friday 27 February 2026 00:33:20 +0000 (0:00:08.191) 0:05:32.244 *******
2026-02-27 00:33:35.312830 | orchestrator | changed: [testbed-manager]
2026-02-27 00:33:35.312841 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:33:35.312852 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:33:35.312863 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:33:35.312873 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:33:35.312884 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:33:35.312895 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:33:35.312906 | orchestrator |
2026-02-27 00:33:35.312917 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-27 00:33:35.312934 | orchestrator | Friday 27 February 2026 00:33:27 +0000 (0:00:07.644) 0:05:39.889 *******
2026-02-27 00:33:35.312945 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:35.312956 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:35.312967 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:35.312978 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:35.312989 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:35.313017 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:35.313029 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:35.313040 | orchestrator |
2026-02-27 00:33:35.313051 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-27 00:33:35.313062 | orchestrator | Friday 27 February 2026 00:33:29 +0000 (0:00:01.784) 0:05:41.673 *******
2026-02-27 00:33:35.313072 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:33:35.313083 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:33:35.313094 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:33:35.313105 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:33:35.313116 | orchestrator | changed: [testbed-manager]
2026-02-27 00:33:35.313126 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:33:35.313138 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:33:35.313148 | orchestrator |
2026-02-27 00:33:35.313165 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-27 00:33:47.752161 | orchestrator | Friday 27 February 2026 00:33:35 +0000 (0:00:05.864) 0:05:47.538 *******
2026-02-27 00:33:47.752281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:33:47.752304 | orchestrator |
2026-02-27 00:33:47.752317 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-27 00:33:47.752326 | orchestrator | Friday 27 February 2026 00:33:35 +0000 (0:00:00.408) 0:05:47.946 *******
2026-02-27 00:33:47.752335 | orchestrator | changed: [testbed-manager]
2026-02-27 00:33:47.752346 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:33:47.752354 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:33:47.752362 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:33:47.752370 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:33:47.752378 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:33:47.752386 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:33:47.752394 | orchestrator |
2026-02-27 00:33:47.752402 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-27 00:33:47.752410 | orchestrator | Friday 27 February 2026 00:33:36 +0000 (0:00:00.740) 0:05:48.687 *******
2026-02-27 00:33:47.752419 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:47.752428 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:47.752436 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:47.752444 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:47.752452 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:47.752460 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:47.752467 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:47.752475 | orchestrator |
2026-02-27 00:33:47.752483 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-27 00:33:47.752491 | orchestrator | Friday 27 February 2026 00:33:38 +0000 (0:00:01.670) 0:05:50.358 *******
2026-02-27 00:33:47.752499 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:33:47.752507 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:33:47.752515 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:33:47.752523 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:33:47.752531 | orchestrator | changed: [testbed-manager]
2026-02-27 00:33:47.752539 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:33:47.752547 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:33:47.752555 | orchestrator |
2026-02-27 00:33:47.752563 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-27 00:33:47.752571 | orchestrator | Friday 27 February 2026 00:33:39 +0000 (0:00:01.820) 0:05:52.178 *******
2026-02-27 00:33:47.752602 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:47.752611 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:47.752618 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:47.752626 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:33:47.752634 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:33:47.752642 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:33:47.752649 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:33:47.752657 | orchestrator |
2026-02-27 00:33:47.752665 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-27 00:33:47.752673 | orchestrator | Friday 27 February 2026 00:33:40 +0000 (0:00:00.300) 0:05:52.479 *******
2026-02-27 00:33:47.752681 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:47.752688 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:47.752698 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:47.752720 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:33:47.752730 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:33:47.752740 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:33:47.752748 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:33:47.752757 | orchestrator |
2026-02-27 00:33:47.752767 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-27 00:33:47.752780 | orchestrator | Friday 27 February 2026 00:33:40 +0000 (0:00:00.426) 0:05:52.905 *******
2026-02-27 00:33:47.752794 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:47.752809 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:47.752823 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:47.752838 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:47.752849 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:47.752858 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:47.752866 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:47.752875 | orchestrator |
2026-02-27 00:33:47.752884 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-27 00:33:47.752894 | orchestrator | Friday 27 February 2026 00:33:40 +0000 (0:00:00.318) 0:05:53.223 *******
2026-02-27 00:33:47.752903 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:47.752912 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:47.752921 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:47.752930 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:33:47.752938 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:33:47.752948 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:33:47.752957 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:33:47.752965 | orchestrator |
2026-02-27 00:33:47.752975 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-27 00:33:47.752985 | orchestrator | Friday 27 February 2026 00:33:41 +0000 (0:00:00.277) 0:05:53.501 *******
2026-02-27 00:33:47.753073 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:47.753089 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:47.753104 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:47.753118 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:47.753129 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:47.753137 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:47.753145 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:47.753153 | orchestrator |
2026-02-27 00:33:47.753161 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-27 00:33:47.753169 | orchestrator | Friday 27 February 2026 00:33:41 +0000 (0:00:00.347) 0:05:53.848 *******
2026-02-27 00:33:47.753177 | orchestrator | ok: [testbed-manager] =>
2026-02-27 00:33:47.753185 | orchestrator |  docker_version: 5:27.5.1
2026-02-27 00:33:47.753193 | orchestrator | ok: [testbed-node-3] =>
2026-02-27 00:33:47.753200 | orchestrator |  docker_version: 5:27.5.1
2026-02-27 00:33:47.753208 | orchestrator | ok: [testbed-node-4] =>
2026-02-27 00:33:47.753216 | orchestrator |  docker_version: 5:27.5.1
2026-02-27 00:33:47.753224 | orchestrator | ok: [testbed-node-5] =>
2026-02-27 00:33:47.753232 | orchestrator |  docker_version: 5:27.5.1
2026-02-27 00:33:47.753256 | orchestrator | ok: [testbed-node-0] =>
2026-02-27 00:33:47.753273 | orchestrator |  docker_version: 5:27.5.1
2026-02-27 00:33:47.753281 | orchestrator | ok: [testbed-node-1] =>
2026-02-27 00:33:47.753289 | orchestrator |  docker_version: 5:27.5.1
2026-02-27 00:33:47.753297 | orchestrator | ok: [testbed-node-2] =>
2026-02-27 00:33:47.753305 | orchestrator |  docker_version: 5:27.5.1
2026-02-27 00:33:47.753319 | orchestrator |
2026-02-27 00:33:47.753333 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-27 00:33:47.753347 | orchestrator | Friday 27 February 2026 00:33:41 +0000 (0:00:00.280) 0:05:54.129 *******
2026-02-27 00:33:47.753359 | orchestrator | ok: [testbed-manager] =>
2026-02-27 00:33:47.753367 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-27 00:33:47.753375 | orchestrator | ok: [testbed-node-3] =>
2026-02-27 00:33:47.753383 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-27 00:33:47.753391 | orchestrator | ok: [testbed-node-4] =>
2026-02-27 00:33:47.753399 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-27 00:33:47.753406 | orchestrator | ok: [testbed-node-5] =>
2026-02-27 00:33:47.753414 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-27 00:33:47.753422 | orchestrator | ok: [testbed-node-0] =>
2026-02-27 00:33:47.753430 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-27 00:33:47.753437 | orchestrator | ok: [testbed-node-1] =>
2026-02-27 00:33:47.753445 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-27 00:33:47.753453 | orchestrator | ok: [testbed-node-2] =>
2026-02-27 00:33:47.753461 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-27 00:33:47.753469 | orchestrator |
2026-02-27 00:33:47.753477 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-27 00:33:47.753485 | orchestrator | Friday 27 February 2026 00:33:42 +0000 (0:00:00.336) 0:05:54.466 *******
2026-02-27 00:33:47.753492 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:47.753500 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:47.753508 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:47.753516 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:33:47.753524 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:33:47.753531 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:33:47.753539 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:33:47.753547 | orchestrator |
2026-02-27 00:33:47.753555 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-27 00:33:47.753562 | orchestrator | Friday 27 February 2026 00:33:42 +0000 (0:00:00.275) 0:05:54.741 *******
2026-02-27 00:33:47.753570 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:47.753581 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:47.753595 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:47.753603 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:33:47.753611 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:33:47.753619 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:33:47.753627 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:33:47.753635 | orchestrator |
2026-02-27 00:33:47.753643 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-27 00:33:47.753657 | orchestrator | Friday 27 February 2026 00:33:42 +0000 (0:00:00.286) 0:05:55.028 *******
2026-02-27 00:33:47.753672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:33:47.753686 | orchestrator |
2026-02-27 00:33:47.753706 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-27 00:33:47.753719 | orchestrator | Friday 27 February 2026 00:33:43 +0000 (0:00:00.469) 0:05:55.498 *******
2026-02-27 00:33:47.753731 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:47.753742 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:47.753756 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:47.753768 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:47.753780 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:47.753801 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:47.753814 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:47.753826 | orchestrator |
2026-02-27 00:33:47.753839 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-27 00:33:47.753854 | orchestrator | Friday 27 February 2026 00:33:44 +0000 (0:00:01.004) 0:05:56.502 *******
2026-02-27 00:33:47.753867 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:33:47.753881 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:33:47.753891 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:33:47.753899 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:33:47.753906 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:33:47.753914 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:33:47.753922 | orchestrator | ok: [testbed-manager]
2026-02-27 00:33:47.753930 | orchestrator |
2026-02-27 00:33:47.753938 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-27 00:33:47.753947 | orchestrator | Friday 27 February 2026 00:33:47 +0000 (0:00:03.075) 0:05:59.578 *******
2026-02-27 00:33:47.753955 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-27 00:33:47.753963 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-27 00:33:47.753971 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-27 00:33:47.753978 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-27 00:33:47.753986 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-27 00:33:47.754068 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-27 00:33:47.754080 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:33:47.754088 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-27 00:33:47.754096 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-27 00:33:47.754104 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-27 00:33:47.754111 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:33:47.754120 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-27 00:33:47.754127 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-27 00:33:47.754135 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-27 00:33:47.754143 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:33:47.754151 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-27 00:33:47.754170 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-27 00:34:47.214806 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-27 00:34:47.214895 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:34:47.214906 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-27 00:34:47.214913 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-27 00:34:47.214920 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-27 00:34:47.214926 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:34:47.214932 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:34:47.214938 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-27 00:34:47.214944 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-27 00:34:47.214950 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-27 00:34:47.214956 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:34:47.214962 | orchestrator |
2026-02-27 00:34:47.214991 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-27 00:34:47.214999 | orchestrator | Friday 27 February 2026 00:33:47 +0000 (0:00:00.651) 0:06:00.230 *******
2026-02-27 00:34:47.215005 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215011 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215017 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215023 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215030 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215035 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215063 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215069 | orchestrator |
2026-02-27 00:34:47.215075 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-27 00:34:47.215081 | orchestrator | Friday 27 February 2026 00:33:54 +0000 (0:00:06.371) 0:06:06.601 *******
2026-02-27 00:34:47.215087 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215093 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215098 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215104 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215110 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215116 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215122 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215128 | orchestrator |
2026-02-27 00:34:47.215133 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-27 00:34:47.215139 | orchestrator | Friday 27 February 2026 00:33:55 +0000 (0:00:01.056) 0:06:07.658 *******
2026-02-27 00:34:47.215145 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215151 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215156 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215162 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215168 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215173 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215179 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215185 | orchestrator |
2026-02-27 00:34:47.215190 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-27 00:34:47.215196 | orchestrator | Friday 27 February 2026 00:34:03 +0000 (0:00:08.080) 0:06:15.738 *******
2026-02-27 00:34:47.215202 | orchestrator | changed: [testbed-manager]
2026-02-27 00:34:47.215208 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215213 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215219 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215225 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215231 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215236 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215242 | orchestrator |
2026-02-27 00:34:47.215248 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-27 00:34:47.215254 | orchestrator | Friday 27 February 2026 00:34:06 +0000 (0:00:03.298) 0:06:19.036 *******
2026-02-27 00:34:47.215260 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215266 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215271 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215277 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215283 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215289 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215294 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215300 | orchestrator |
2026-02-27 00:34:47.215306 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-27 00:34:47.215311 | orchestrator | Friday 27 February 2026 00:34:08 +0000 (0:00:01.409) 0:06:20.446 *******
2026-02-27 00:34:47.215317 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215323 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215328 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215334 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215340 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215345 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215351 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215357 | orchestrator |
2026-02-27 00:34:47.215364 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-27 00:34:47.215371 | orchestrator | Friday 27 February 2026 00:34:09 +0000 (0:00:00.616) 0:06:21.977 *******
2026-02-27 00:34:47.215377 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:34:47.215384 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:34:47.215390 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:34:47.215397 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:34:47.215408 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:34:47.215415 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:34:47.215422 | orchestrator | changed: [testbed-manager]
2026-02-27 00:34:47.215428 | orchestrator |
2026-02-27 00:34:47.215435 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-27 00:34:47.215442 | orchestrator | Friday 27 February 2026 00:34:10 +0000 (0:00:00.616) 0:06:22.593 *******
2026-02-27 00:34:47.215448 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215455 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215462 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215469 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215484 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215499 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215505 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215512 | orchestrator |
2026-02-27 00:34:47.215519 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-27 00:34:47.215538 | orchestrator | Friday 27 February 2026 00:34:19 +0000 (0:00:08.742) 0:06:31.335 *******
2026-02-27 00:34:47.215545 | orchestrator | changed: [testbed-manager]
2026-02-27 00:34:47.215551 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215558 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215564 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215571 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215577 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215583 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215590 | orchestrator |
2026-02-27 00:34:47.215597 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-27 00:34:47.215603 | orchestrator | Friday 27 February 2026 00:34:20 +0000 (0:00:01.033) 0:06:32.369 *******
2026-02-27 00:34:47.215610 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215616 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215622 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215629 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215636 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215642 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215649 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215656 | orchestrator |
2026-02-27 00:34:47.215662 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-27 00:34:47.215669 | orchestrator | Friday 27 February 2026 00:34:29 +0000 (0:00:09.290) 0:06:41.660 *******
2026-02-27 00:34:47.215675 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215682 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.215688 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215695 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215701 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.215708 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.215714 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.215721 | orchestrator |
2026-02-27 00:34:47.215728 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-27 00:34:47.215734 | orchestrator | Friday 27 February 2026 00:34:40 +0000 (0:00:10.974) 0:06:52.634 *******
2026-02-27 00:34:47.215741 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-27 00:34:47.215748 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-27 00:34:47.215754 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-27 00:34:47.215760 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-27 00:34:47.215765 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-27 00:34:47.215771 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-27 00:34:47.215777 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-27 00:34:47.215783 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-27 00:34:47.215788 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-27 00:34:47.215802 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-27 00:34:47.215808 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-27 00:34:47.215849 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-27 00:34:47.215855 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-27 00:34:47.215861 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-27 00:34:47.215867 | orchestrator |
2026-02-27 00:34:47.215873 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-27 00:34:47.215879 | orchestrator | Friday 27 February 2026 00:34:41 +0000 (0:00:01.205) 0:06:53.840 *******
2026-02-27 00:34:47.215887 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:34:47.215893 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:34:47.215899 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:34:47.215905 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:34:47.215911 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:34:47.215916 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:34:47.215922 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:34:47.215928 | orchestrator |
2026-02-27 00:34:47.215934 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-27 00:34:47.215940 | orchestrator | Friday 27 February 2026 00:34:42 +0000 (0:00:00.554) 0:06:54.395 *******
2026-02-27 00:34:47.215946 | orchestrator | ok: [testbed-manager]
2026-02-27 00:34:47.215951 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:34:47.215957 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:34:47.215963 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:34:47.216004 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:34:47.216011 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:34:47.216017 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:34:47.216023 | orchestrator |
2026-02-27 00:34:47.216029 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-27 00:34:47.216036 | orchestrator | Friday 27 February 2026 00:34:46 +0000 (0:00:04.074) 0:06:58.470 *******
2026-02-27 00:34:47.216041 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:34:47.216047 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:34:47.216053 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:34:47.216059 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:34:47.216064 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:34:47.216070 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:34:47.216076 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:34:47.216082 | orchestrator |
2026-02-27 00:34:47.216088 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-27 00:34:47.216094 | orchestrator | Friday 27 February 2026 00:34:46 +0000 (0:00:00.515) 0:06:58.985 *******
2026-02-27 00:34:47.216100 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-27 00:34:47.216106 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-27 00:34:47.216111 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:34:47.216117 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-27 00:34:47.216123 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-27 00:34:47.216129 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:34:47.216134 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-27 00:34:47.216140 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-27 00:34:47.216146 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:34:47.216157 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-27 00:35:06.942767 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-27 00:35:06.942880 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:35:06.942900 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-27 00:35:06.942911 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-27 00:35:06.942921 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:35:06.943015 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-27 00:35:06.943029 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-27 00:35:06.943038 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:35:06.943048 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-27 00:35:06.943056 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-27 00:35:06.943065 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:35:06.943075 | orchestrator |
2026-02-27 00:35:06.943088 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-02-27 00:35:06.943100 | orchestrator | Friday 27 February 2026 00:34:47 +0000 (0:00:00.776) 0:06:59.762 ******* 2026-02-27 00:35:06.943109 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:06.943119 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:06.943128 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:35:06.943136 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:35:06.943144 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:35:06.943152 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:35:06.943160 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:35:06.943169 | orchestrator | 2026-02-27 00:35:06.943177 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-27 00:35:06.943186 | orchestrator | Friday 27 February 2026 00:34:48 +0000 (0:00:00.549) 0:07:00.312 ******* 2026-02-27 00:35:06.943194 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:06.943202 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:06.943209 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:35:06.943217 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:35:06.943225 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:35:06.943232 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:35:06.943239 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:35:06.943246 | orchestrator | 2026-02-27 00:35:06.943253 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-27 00:35:06.943261 | orchestrator | Friday 27 February 2026 00:34:48 +0000 (0:00:00.539) 0:07:00.851 ******* 2026-02-27 00:35:06.943269 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:06.943277 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:06.943285 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:35:06.943293 | orchestrator | skipping: 
[testbed-node-5] 2026-02-27 00:35:06.943301 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:35:06.943309 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:35:06.943317 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:35:06.943325 | orchestrator | 2026-02-27 00:35:06.943333 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-27 00:35:06.943341 | orchestrator | Friday 27 February 2026 00:34:49 +0000 (0:00:00.561) 0:07:01.413 ******* 2026-02-27 00:35:06.943350 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.943357 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:06.943366 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:06.943374 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:06.943383 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:06.943392 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:06.943400 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:06.943408 | orchestrator | 2026-02-27 00:35:06.943416 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-27 00:35:06.943424 | orchestrator | Friday 27 February 2026 00:34:51 +0000 (0:00:01.921) 0:07:03.334 ******* 2026-02-27 00:35:06.943433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:35:06.943443 | orchestrator | 2026-02-27 00:35:06.943451 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-27 00:35:06.943460 | orchestrator | Friday 27 February 2026 00:34:51 +0000 (0:00:00.905) 0:07:04.239 ******* 2026-02-27 00:35:06.943489 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.943499 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:06.943506 | orchestrator | changed: 
[testbed-node-4] 2026-02-27 00:35:06.943513 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:06.943521 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:06.943529 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:06.943537 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:06.943544 | orchestrator | 2026-02-27 00:35:06.943552 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-27 00:35:06.943560 | orchestrator | Friday 27 February 2026 00:34:52 +0000 (0:00:00.856) 0:07:05.096 ******* 2026-02-27 00:35:06.943568 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.943576 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:35:06.943584 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:06.943591 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:06.943600 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:06.943608 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:06.943617 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:06.943626 | orchestrator | 2026-02-27 00:35:06.943634 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-27 00:35:06.943642 | orchestrator | Friday 27 February 2026 00:34:53 +0000 (0:00:00.890) 0:07:05.987 ******* 2026-02-27 00:35:06.943649 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.943657 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:06.943665 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:35:06.943673 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:06.943681 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:06.943689 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:06.943697 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:06.943704 | orchestrator | 2026-02-27 00:35:06.943713 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-27 00:35:06.943744 | orchestrator | Friday 27 February 2026 00:34:55 +0000 (0:00:01.565) 0:07:07.553 ******* 2026-02-27 00:35:06.943756 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:06.943764 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:06.943773 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:06.943781 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:06.943790 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:06.943798 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:06.943807 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:06.943815 | orchestrator | 2026-02-27 00:35:06.943824 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-27 00:35:06.943833 | orchestrator | Friday 27 February 2026 00:34:56 +0000 (0:00:01.360) 0:07:08.913 ******* 2026-02-27 00:35:06.943841 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.943849 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:06.943857 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:35:06.943865 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:06.943874 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:06.943883 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:06.943891 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:06.943900 | orchestrator | 2026-02-27 00:35:06.943908 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-27 00:35:06.943917 | orchestrator | Friday 27 February 2026 00:34:58 +0000 (0:00:01.378) 0:07:10.292 ******* 2026-02-27 00:35:06.943925 | orchestrator | changed: [testbed-manager] 2026-02-27 00:35:06.943933 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:06.943943 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:35:06.943951 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:06.943983 | orchestrator | changed: 
[testbed-node-0] 2026-02-27 00:35:06.943993 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:06.944001 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:06.944010 | orchestrator | 2026-02-27 00:35:06.944031 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-27 00:35:06.944043 | orchestrator | Friday 27 February 2026 00:34:59 +0000 (0:00:01.386) 0:07:11.678 ******* 2026-02-27 00:35:06.944053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:35:06.944063 | orchestrator | 2026-02-27 00:35:06.944071 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-27 00:35:06.944080 | orchestrator | Friday 27 February 2026 00:35:00 +0000 (0:00:01.069) 0:07:12.748 ******* 2026-02-27 00:35:06.944088 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:06.944096 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:06.944103 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.944111 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:06.944119 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:06.944126 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:06.944134 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:06.944142 | orchestrator | 2026-02-27 00:35:06.944150 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-27 00:35:06.944158 | orchestrator | Friday 27 February 2026 00:35:02 +0000 (0:00:01.609) 0:07:14.358 ******* 2026-02-27 00:35:06.944166 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.944175 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:06.944183 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:06.944191 | orchestrator | ok: [testbed-node-0] 
2026-02-27 00:35:06.944199 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:06.944223 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:06.944233 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:06.944241 | orchestrator | 2026-02-27 00:35:06.944249 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-27 00:35:06.944258 | orchestrator | Friday 27 February 2026 00:35:03 +0000 (0:00:01.097) 0:07:15.455 ******* 2026-02-27 00:35:06.944267 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.944276 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:06.944284 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:06.944292 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:06.944300 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:06.944308 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:06.944316 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:06.944324 | orchestrator | 2026-02-27 00:35:06.944333 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-27 00:35:06.944341 | orchestrator | Friday 27 February 2026 00:35:04 +0000 (0:00:01.110) 0:07:16.566 ******* 2026-02-27 00:35:06.944349 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:06.944358 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:06.944367 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:06.944376 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:06.944385 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:06.944394 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:06.944401 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:06.944409 | orchestrator | 2026-02-27 00:35:06.944416 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-27 00:35:06.944425 | orchestrator | Friday 27 February 2026 00:35:05 +0000 (0:00:01.364) 0:07:17.930 ******* 2026-02-27 00:35:06.944433 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:35:06.944442 | orchestrator | 2026-02-27 00:35:06.944449 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-27 00:35:06.944457 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.910) 0:07:18.841 ******* 2026-02-27 00:35:06.944465 | orchestrator | 2026-02-27 00:35:06.944474 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-27 00:35:06.944493 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.040) 0:07:18.882 ******* 2026-02-27 00:35:06.944501 | orchestrator | 2026-02-27 00:35:06.944510 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-27 00:35:06.944518 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.047) 0:07:18.930 ******* 2026-02-27 00:35:06.944526 | orchestrator | 2026-02-27 00:35:06.944534 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-27 00:35:06.944557 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.040) 0:07:18.970 ******* 2026-02-27 00:35:32.978397 | orchestrator | 2026-02-27 00:35:32.978506 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-27 00:35:32.978522 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.039) 0:07:19.010 ******* 2026-02-27 00:35:32.978534 | orchestrator | 2026-02-27 00:35:32.978545 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-27 00:35:32.978556 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.047) 0:07:19.057 ******* 2026-02-27 00:35:32.978566 | orchestrator | 
2026-02-27 00:35:32.978577 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-27 00:35:32.978587 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.054) 0:07:19.112 ******* 2026-02-27 00:35:32.978598 | orchestrator | 2026-02-27 00:35:32.978609 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-27 00:35:32.978619 | orchestrator | Friday 27 February 2026 00:35:06 +0000 (0:00:00.052) 0:07:19.165 ******* 2026-02-27 00:35:32.978631 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:32.978643 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:32.978654 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:32.978665 | orchestrator | 2026-02-27 00:35:32.978676 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-27 00:35:32.978688 | orchestrator | Friday 27 February 2026 00:35:08 +0000 (0:00:01.189) 0:07:20.354 ******* 2026-02-27 00:35:32.978700 | orchestrator | changed: [testbed-manager] 2026-02-27 00:35:32.978713 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:32.978725 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:35:32.978737 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:32.978749 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:32.978761 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:32.978773 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:32.978785 | orchestrator | 2026-02-27 00:35:32.978797 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-27 00:35:32.978808 | orchestrator | Friday 27 February 2026 00:35:09 +0000 (0:00:01.440) 0:07:21.794 ******* 2026-02-27 00:35:32.978820 | orchestrator | changed: [testbed-manager] 2026-02-27 00:35:32.978832 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:32.978844 | orchestrator | changed: [testbed-node-4] 
2026-02-27 00:35:32.978856 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:32.978868 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:32.978880 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:32.978891 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:32.978903 | orchestrator | 2026-02-27 00:35:32.978915 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-27 00:35:32.978927 | orchestrator | Friday 27 February 2026 00:35:10 +0000 (0:00:01.140) 0:07:22.935 ******* 2026-02-27 00:35:32.978939 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:32.979002 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:32.979014 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:32.979026 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:35:32.979037 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:32.979049 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:32.979061 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:32.979072 | orchestrator | 2026-02-27 00:35:32.979083 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-27 00:35:32.979095 | orchestrator | Friday 27 February 2026 00:35:13 +0000 (0:00:02.416) 0:07:25.352 ******* 2026-02-27 00:35:32.979150 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:32.979164 | orchestrator | 2026-02-27 00:35:32.979174 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-27 00:35:32.979184 | orchestrator | Friday 27 February 2026 00:35:13 +0000 (0:00:00.104) 0:07:25.456 ******* 2026-02-27 00:35:32.979195 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:32.979205 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:32.979215 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:32.979226 | orchestrator | changed: [testbed-node-4] 2026-02-27 
00:35:32.979236 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:32.979247 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:35:32.979254 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:32.979260 | orchestrator | 2026-02-27 00:35:32.979267 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-27 00:35:32.979274 | orchestrator | Friday 27 February 2026 00:35:14 +0000 (0:00:01.000) 0:07:26.457 ******* 2026-02-27 00:35:32.979281 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:32.979287 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:32.979293 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:35:32.979299 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:35:32.979305 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:35:32.979312 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:35:32.979318 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:35:32.979324 | orchestrator | 2026-02-27 00:35:32.979332 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-27 00:35:32.979342 | orchestrator | Friday 27 February 2026 00:35:14 +0000 (0:00:00.596) 0:07:27.053 ******* 2026-02-27 00:35:32.979354 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:35:32.979365 | orchestrator | 2026-02-27 00:35:32.979375 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-27 00:35:32.979385 | orchestrator | Friday 27 February 2026 00:35:15 +0000 (0:00:01.151) 0:07:28.205 ******* 2026-02-27 00:35:32.979395 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:32.979405 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:32.979416 | orchestrator 
| ok: [testbed-node-4] 2026-02-27 00:35:32.979423 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:32.979429 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:32.979436 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:32.979442 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:32.979448 | orchestrator | 2026-02-27 00:35:32.979455 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-27 00:35:32.979461 | orchestrator | Friday 27 February 2026 00:35:16 +0000 (0:00:00.892) 0:07:29.097 ******* 2026-02-27 00:35:32.979467 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-27 00:35:32.979492 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-27 00:35:32.979500 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-27 00:35:32.979506 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-27 00:35:32.979512 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-27 00:35:32.979518 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-27 00:35:32.979524 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-27 00:35:32.979531 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-27 00:35:32.979537 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-27 00:35:32.979543 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-27 00:35:32.979550 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-27 00:35:32.979556 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-27 00:35:32.979570 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-27 00:35:32.979577 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-27 00:35:32.979583 | orchestrator | 2026-02-27 00:35:32.979589 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-27 00:35:32.979595 | orchestrator | Friday 27 February 2026 00:35:19 +0000 (0:00:02.530) 0:07:31.628 ******* 2026-02-27 00:35:32.979601 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:32.979608 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:32.979614 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:35:32.979620 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:35:32.979626 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:35:32.979632 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:35:32.979638 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:35:32.979644 | orchestrator | 2026-02-27 00:35:32.979651 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-27 00:35:32.979657 | orchestrator | Friday 27 February 2026 00:35:20 +0000 (0:00:00.725) 0:07:32.354 ******* 2026-02-27 00:35:32.979666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:35:32.979674 | orchestrator | 2026-02-27 00:35:32.979680 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-27 00:35:32.979686 | orchestrator | Friday 27 February 2026 00:35:20 +0000 (0:00:00.879) 0:07:33.234 ******* 2026-02-27 00:35:32.979692 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:32.979698 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:32.979705 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:32.979711 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:32.979717 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:32.979723 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:32.979729 | orchestrator | ok: 
[testbed-node-2] 2026-02-27 00:35:32.979735 | orchestrator | 2026-02-27 00:35:32.979742 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-27 00:35:32.979748 | orchestrator | Friday 27 February 2026 00:35:21 +0000 (0:00:00.827) 0:07:34.061 ******* 2026-02-27 00:35:32.979759 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:32.979765 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:32.979772 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:32.979778 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:32.979784 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:32.979790 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:32.979796 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:32.979802 | orchestrator | 2026-02-27 00:35:32.979808 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-27 00:35:32.979815 | orchestrator | Friday 27 February 2026 00:35:22 +0000 (0:00:01.063) 0:07:35.125 ******* 2026-02-27 00:35:32.979821 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:32.979827 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:32.979833 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:35:32.979839 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:35:32.979846 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:35:32.979852 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:35:32.979858 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:35:32.979864 | orchestrator | 2026-02-27 00:35:32.979870 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-27 00:35:32.979876 | orchestrator | Friday 27 February 2026 00:35:23 +0000 (0:00:00.534) 0:07:35.660 ******* 2026-02-27 00:35:32.979882 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:32.979889 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:35:32.979895 | 
orchestrator | ok: [testbed-node-4] 2026-02-27 00:35:32.979901 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:35:32.979907 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:35:32.979918 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:35:32.979924 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:35:32.979930 | orchestrator | 2026-02-27 00:35:32.979936 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-27 00:35:32.979972 | orchestrator | Friday 27 February 2026 00:35:24 +0000 (0:00:01.545) 0:07:37.205 ******* 2026-02-27 00:35:32.979980 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:35:32.979987 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:35:32.979993 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:35:32.979999 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:35:32.980005 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:35:32.980011 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:35:32.980017 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:35:32.980024 | orchestrator | 2026-02-27 00:35:32.980030 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-27 00:35:32.980036 | orchestrator | Friday 27 February 2026 00:35:25 +0000 (0:00:00.614) 0:07:37.819 ******* 2026-02-27 00:35:32.980042 | orchestrator | ok: [testbed-manager] 2026-02-27 00:35:32.980049 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:35:32.980055 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:35:32.980061 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:35:32.980067 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:35:32.980073 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:35:32.980084 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:05.192924 | orchestrator | 2026-02-27 00:36:05.193022 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-02-27 00:36:05.193029 | orchestrator | Friday 27 February 2026 00:35:32 +0000 (0:00:07.388) 0:07:45.208 ******* 2026-02-27 00:36:05.193033 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193038 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:05.193043 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:05.193047 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:05.193051 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:05.193056 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:05.193060 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:05.193064 | orchestrator | 2026-02-27 00:36:05.193068 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-27 00:36:05.193072 | orchestrator | Friday 27 February 2026 00:35:34 +0000 (0:00:01.577) 0:07:46.785 ******* 2026-02-27 00:36:05.193076 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193080 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:05.193084 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:05.193088 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:05.193092 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:05.193096 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:05.193100 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:05.193104 | orchestrator | 2026-02-27 00:36:05.193108 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-27 00:36:05.193112 | orchestrator | Friday 27 February 2026 00:35:36 +0000 (0:00:01.661) 0:07:48.446 ******* 2026-02-27 00:36:05.193116 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193120 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:05.193124 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:05.193128 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:05.193132 | 
orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:05.193136 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:05.193140 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:05.193144 | orchestrator | 2026-02-27 00:36:05.193148 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-27 00:36:05.193152 | orchestrator | Friday 27 February 2026 00:35:37 +0000 (0:00:01.664) 0:07:50.111 ******* 2026-02-27 00:36:05.193156 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193160 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:05.193164 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193181 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193186 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193189 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193193 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:05.193197 | orchestrator | 2026-02-27 00:36:05.193201 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-27 00:36:05.193205 | orchestrator | Friday 27 February 2026 00:35:38 +0000 (0:00:00.869) 0:07:50.981 ******* 2026-02-27 00:36:05.193209 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:36:05.193213 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:36:05.193217 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:36:05.193221 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:36:05.193226 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:36:05.193229 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:36:05.193233 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:36:05.193237 | orchestrator | 2026-02-27 00:36:05.193241 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-27 00:36:05.193245 | orchestrator | Friday 27 February 2026 00:35:39 +0000 (0:00:01.008) 0:07:51.990 ******* 
2026-02-27 00:36:05.193249 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:36:05.193253 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:36:05.193257 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:36:05.193261 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:36:05.193265 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:36:05.193269 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:36:05.193273 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:36:05.193277 | orchestrator | 2026-02-27 00:36:05.193281 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-27 00:36:05.193285 | orchestrator | Friday 27 February 2026 00:35:40 +0000 (0:00:00.503) 0:07:52.493 ******* 2026-02-27 00:36:05.193289 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193301 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:05.193305 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193309 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193313 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193317 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193321 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:05.193325 | orchestrator | 2026-02-27 00:36:05.193329 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-27 00:36:05.193333 | orchestrator | Friday 27 February 2026 00:35:40 +0000 (0:00:00.536) 0:07:53.030 ******* 2026-02-27 00:36:05.193337 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193341 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:05.193345 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193349 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193353 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193357 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193361 | orchestrator | ok: [testbed-node-2] 2026-02-27 
00:36:05.193365 | orchestrator | 2026-02-27 00:36:05.193369 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-27 00:36:05.193373 | orchestrator | Friday 27 February 2026 00:35:41 +0000 (0:00:00.569) 0:07:53.600 ******* 2026-02-27 00:36:05.193377 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193381 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:05.193385 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193389 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193393 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193397 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193401 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:05.193405 | orchestrator | 2026-02-27 00:36:05.193409 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-27 00:36:05.193413 | orchestrator | Friday 27 February 2026 00:35:42 +0000 (0:00:00.720) 0:07:54.321 ******* 2026-02-27 00:36:05.193417 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193421 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:05.193429 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193433 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193437 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193441 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:05.193445 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193449 | orchestrator | 2026-02-27 00:36:05.193461 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-27 00:36:05.193466 | orchestrator | Friday 27 February 2026 00:35:47 +0000 (0:00:05.607) 0:07:59.928 ******* 2026-02-27 00:36:05.193470 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:36:05.193474 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:36:05.193478 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:36:05.193482 
| orchestrator | skipping: [testbed-node-5] 2026-02-27 00:36:05.193486 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:36:05.193490 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:36:05.193494 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:36:05.193498 | orchestrator | 2026-02-27 00:36:05.193502 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-27 00:36:05.193506 | orchestrator | Friday 27 February 2026 00:35:48 +0000 (0:00:00.573) 0:08:00.502 ******* 2026-02-27 00:36:05.193511 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:36:05.193516 | orchestrator | 2026-02-27 00:36:05.193521 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-27 00:36:05.193525 | orchestrator | Friday 27 February 2026 00:35:49 +0000 (0:00:01.030) 0:08:01.532 ******* 2026-02-27 00:36:05.193530 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193535 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:05.193539 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193544 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193548 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193553 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:05.193557 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193562 | orchestrator | 2026-02-27 00:36:05.193567 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-27 00:36:05.193571 | orchestrator | Friday 27 February 2026 00:35:51 +0000 (0:00:02.057) 0:08:03.590 ******* 2026-02-27 00:36:05.193576 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193580 | orchestrator | ok: [testbed-node-3] 2026-02-27 
00:36:05.193585 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193590 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193594 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193599 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193603 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:05.193608 | orchestrator | 2026-02-27 00:36:05.193612 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-27 00:36:05.193617 | orchestrator | Friday 27 February 2026 00:35:52 +0000 (0:00:01.096) 0:08:04.687 ******* 2026-02-27 00:36:05.193622 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:05.193626 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:05.193631 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:05.193635 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:05.193640 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:05.193644 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:05.193649 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:05.193654 | orchestrator | 2026-02-27 00:36:05.193658 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-27 00:36:05.193663 | orchestrator | Friday 27 February 2026 00:35:53 +0000 (0:00:00.846) 0:08:05.534 ******* 2026-02-27 00:36:05.193670 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-27 00:36:05.193676 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-27 00:36:05.193684 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-27 00:36:05.193688 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-27 00:36:05.193693 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-27 00:36:05.193697 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-27 00:36:05.193702 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-27 00:36:05.193707 | orchestrator | 2026-02-27 00:36:05.193711 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-27 00:36:05.193716 | orchestrator | Friday 27 February 2026 00:35:55 +0000 (0:00:01.914) 0:08:07.448 ******* 2026-02-27 00:36:05.193721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:36:05.193725 | orchestrator | 2026-02-27 00:36:05.193730 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-27 00:36:05.193735 | orchestrator | Friday 27 February 2026 00:35:56 +0000 (0:00:00.818) 0:08:08.267 ******* 2026-02-27 00:36:05.193739 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:05.193744 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:05.193748 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:05.193753 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:05.193758 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:05.193763 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:05.193767 | orchestrator | changed: 
[testbed-node-1] 2026-02-27 00:36:05.193772 | orchestrator | 2026-02-27 00:36:05.193779 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-27 00:36:37.311679 | orchestrator | Friday 27 February 2026 00:36:05 +0000 (0:00:09.158) 0:08:17.426 ******* 2026-02-27 00:36:37.311796 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:37.311819 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:37.311837 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:37.311852 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:37.311868 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:37.311883 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:37.311897 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:37.311937 | orchestrator | 2026-02-27 00:36:37.311956 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-27 00:36:37.311972 | orchestrator | Friday 27 February 2026 00:36:07 +0000 (0:00:02.041) 0:08:19.467 ******* 2026-02-27 00:36:37.311985 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:37.311995 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:37.312004 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:37.312012 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:37.312021 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:37.312030 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:37.312039 | orchestrator | 2026-02-27 00:36:37.312048 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-27 00:36:37.312057 | orchestrator | Friday 27 February 2026 00:36:08 +0000 (0:00:01.296) 0:08:20.763 ******* 2026-02-27 00:36:37.312070 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.312087 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.312102 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.312117 | orchestrator | changed: 
[testbed-node-5] 2026-02-27 00:36:37.312133 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.312178 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.312194 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.312210 | orchestrator | 2026-02-27 00:36:37.312225 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-27 00:36:37.312240 | orchestrator | 2026-02-27 00:36:37.312254 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-27 00:36:37.312268 | orchestrator | Friday 27 February 2026 00:36:09 +0000 (0:00:01.315) 0:08:22.079 ******* 2026-02-27 00:36:37.312282 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:36:37.312298 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:36:37.312314 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:36:37.312332 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:36:37.312346 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:36:37.312361 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:36:37.312376 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:36:37.312390 | orchestrator | 2026-02-27 00:36:37.312405 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-27 00:36:37.312421 | orchestrator | 2026-02-27 00:36:37.312438 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-27 00:36:37.312453 | orchestrator | Friday 27 February 2026 00:36:10 +0000 (0:00:00.804) 0:08:22.884 ******* 2026-02-27 00:36:37.312468 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.312485 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.312500 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.312515 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.312530 | orchestrator | changed: [testbed-node-0] 2026-02-27 
00:36:37.312545 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.312560 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.312576 | orchestrator | 2026-02-27 00:36:37.312591 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-27 00:36:37.312625 | orchestrator | Friday 27 February 2026 00:36:12 +0000 (0:00:01.380) 0:08:24.264 ******* 2026-02-27 00:36:37.312636 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:37.312645 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:37.312654 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:37.312662 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:37.312671 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:37.312680 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:37.312688 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:37.312697 | orchestrator | 2026-02-27 00:36:37.312705 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-27 00:36:37.312714 | orchestrator | Friday 27 February 2026 00:36:13 +0000 (0:00:01.440) 0:08:25.704 ******* 2026-02-27 00:36:37.312723 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:36:37.312732 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:36:37.312740 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:36:37.312749 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:36:37.312758 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:36:37.312766 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:36:37.312775 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:36:37.312783 | orchestrator | 2026-02-27 00:36:37.312792 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-27 00:36:37.312801 | orchestrator | Friday 27 February 2026 00:36:14 +0000 (0:00:00.584) 0:08:26.289 ******* 2026-02-27 00:36:37.312810 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:36:37.312821 | orchestrator | 2026-02-27 00:36:37.312829 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-27 00:36:37.312838 | orchestrator | Friday 27 February 2026 00:36:15 +0000 (0:00:01.041) 0:08:27.330 ******* 2026-02-27 00:36:37.312849 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:36:37.312959 | orchestrator | 2026-02-27 00:36:37.312971 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-27 00:36:37.312979 | orchestrator | Friday 27 February 2026 00:36:15 +0000 (0:00:00.880) 0:08:28.210 ******* 2026-02-27 00:36:37.312988 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.312997 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.313010 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.313025 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.313039 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.313054 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.313069 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.313082 | orchestrator | 2026-02-27 00:36:37.313120 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-27 00:36:37.313136 | orchestrator | Friday 27 February 2026 00:36:25 +0000 (0:00:09.663) 0:08:37.873 ******* 2026-02-27 00:36:37.313152 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.313167 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.313182 | orchestrator | changed: [testbed-node-4] 2026-02-27 
00:36:37.313193 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.313202 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.313211 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.313220 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.313228 | orchestrator | 2026-02-27 00:36:37.313237 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-27 00:36:37.313246 | orchestrator | Friday 27 February 2026 00:36:26 +0000 (0:00:01.068) 0:08:38.942 ******* 2026-02-27 00:36:37.313255 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.313263 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.313272 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.313281 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.313289 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.313298 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.313307 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.313315 | orchestrator | 2026-02-27 00:36:37.313324 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-27 00:36:37.313333 | orchestrator | Friday 27 February 2026 00:36:28 +0000 (0:00:01.389) 0:08:40.332 ******* 2026-02-27 00:36:37.313342 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.313350 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.313359 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.313367 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.313376 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.313385 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.313393 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.313402 | orchestrator | 2026-02-27 00:36:37.313411 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-27 00:36:37.313419 | orchestrator | Friday 27 February 2026 00:36:29 +0000 (0:00:01.910) 0:08:42.242 ******* 2026-02-27 00:36:37.313428 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.313437 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.313445 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.313454 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.313462 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.313471 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.313479 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.313488 | orchestrator | 2026-02-27 00:36:37.313497 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-27 00:36:37.313506 | orchestrator | Friday 27 February 2026 00:36:31 +0000 (0:00:01.243) 0:08:43.486 ******* 2026-02-27 00:36:37.313514 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.313523 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.313540 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.313548 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.313557 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.313566 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.313574 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.313583 | orchestrator | 2026-02-27 00:36:37.313591 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-27 00:36:37.313600 | orchestrator | 2026-02-27 00:36:37.313616 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-27 00:36:37.313625 | orchestrator | Friday 27 February 2026 00:36:32 +0000 (0:00:01.117) 0:08:44.603 ******* 2026-02-27 00:36:37.313634 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-27 00:36:37.313643 | orchestrator | 2026-02-27 00:36:37.313651 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-27 00:36:37.313660 | orchestrator | Friday 27 February 2026 00:36:33 +0000 (0:00:00.842) 0:08:45.445 ******* 2026-02-27 00:36:37.313669 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:37.313677 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:37.313686 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:37.313695 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:37.313704 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:37.313712 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:37.313721 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:37.313730 | orchestrator | 2026-02-27 00:36:37.313738 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-27 00:36:37.313747 | orchestrator | Friday 27 February 2026 00:36:34 +0000 (0:00:01.100) 0:08:46.545 ******* 2026-02-27 00:36:37.313756 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:37.313765 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:37.313774 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:37.313782 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:37.313791 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:37.313800 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:37.313809 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:37.313817 | orchestrator | 2026-02-27 00:36:37.313826 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-27 00:36:37.313834 | orchestrator | Friday 27 February 2026 00:36:35 +0000 (0:00:01.134) 0:08:47.680 ******* 2026-02-27 00:36:37.313843 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-27 00:36:37.313852 | orchestrator | 2026-02-27 00:36:37.313861 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-27 00:36:37.313869 | orchestrator | Friday 27 February 2026 00:36:36 +0000 (0:00:01.022) 0:08:48.703 ******* 2026-02-27 00:36:37.313878 | orchestrator | ok: [testbed-manager] 2026-02-27 00:36:37.313887 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:36:37.313895 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:36:37.313904 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:36:37.313941 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:36:37.313951 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:36:37.313960 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:36:37.313969 | orchestrator | 2026-02-27 00:36:37.313985 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-27 00:36:38.932835 | orchestrator | Friday 27 February 2026 00:36:37 +0000 (0:00:00.837) 0:08:49.541 ******* 2026-02-27 00:36:38.932983 | orchestrator | changed: [testbed-manager] 2026-02-27 00:36:38.933003 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:36:38.933015 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:36:38.933026 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:36:38.933037 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:36:38.933048 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:36:38.933059 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:36:38.933099 | orchestrator | 2026-02-27 00:36:38.933111 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:36:38.933124 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-27 00:36:38.933136 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-27 00:36:38.933147 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-27 00:36:38.933158 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-27 00:36:38.933168 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-27 00:36:38.933179 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-27 00:36:38.933190 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-27 00:36:38.933201 | orchestrator | 2026-02-27 00:36:38.933212 | orchestrator | 2026-02-27 00:36:38.933222 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:36:38.933234 | orchestrator | Friday 27 February 2026 00:36:38 +0000 (0:00:01.089) 0:08:50.630 ******* 2026-02-27 00:36:38.933245 | orchestrator | =============================================================================== 2026-02-27 00:36:38.933255 | orchestrator | osism.commons.packages : Install required packages --------------------- 85.57s 2026-02-27 00:36:38.933266 | orchestrator | osism.commons.packages : Download required packages -------------------- 62.81s 2026-02-27 00:36:38.933277 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.42s 2026-02-27 00:36:38.933287 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.74s 2026-02-27 00:36:38.933298 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.51s 2026-02-27 00:36:38.933323 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.13s 2026-02-27 00:36:38.933335 | orchestrator | osism.services.docker : Install docker package ------------------------- 
10.97s 2026-02-27 00:36:38.933346 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.66s 2026-02-27 00:36:38.933361 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.29s 2026-02-27 00:36:38.933379 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.16s 2026-02-27 00:36:38.933398 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.74s 2026-02-27 00:36:38.933416 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.33s 2026-02-27 00:36:38.933434 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.19s 2026-02-27 00:36:38.933452 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.08s 2026-02-27 00:36:38.933471 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.64s 2026-02-27 00:36:38.933488 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.39s 2026-02-27 00:36:38.933507 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.37s 2026-02-27 00:36:38.933526 | orchestrator | osism.commons.services : Populate service facts ------------------------- 6.05s 2026-02-27 00:36:38.933545 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.86s 2026-02-27 00:36:38.933564 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.85s 2026-02-27 00:36:39.249776 | orchestrator | + osism apply fail2ban 2026-02-27 00:36:52.227457 | orchestrator | 2026-02-27 00:36:52 | INFO  | Task 8fcbbe18-cf1b-489b-b30a-b946f6b7087a (fail2ban) was prepared for execution. 
2026-02-27 00:36:52.227565 | orchestrator | 2026-02-27 00:36:52 | INFO  | It takes a moment until task 8fcbbe18-cf1b-489b-b30a-b946f6b7087a (fail2ban) has been started and output is visible here. 2026-02-27 00:37:13.819744 | orchestrator | 2026-02-27 00:37:13.819851 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-27 00:37:13.819866 | orchestrator | 2026-02-27 00:37:13.819877 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-27 00:37:13.819887 | orchestrator | Friday 27 February 2026 00:36:56 +0000 (0:00:00.274) 0:00:00.274 ******* 2026-02-27 00:37:13.819956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 00:37:13.819969 | orchestrator | 2026-02-27 00:37:13.819979 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-27 00:37:13.819989 | orchestrator | Friday 27 February 2026 00:36:58 +0000 (0:00:01.098) 0:00:01.372 ******* 2026-02-27 00:37:13.819999 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:37:13.820012 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:37:13.820022 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:37:13.820031 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:37:13.820041 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:37:13.820051 | orchestrator | changed: [testbed-manager] 2026-02-27 00:37:13.820060 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:37:13.820071 | orchestrator | 2026-02-27 00:37:13.820081 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-27 00:37:13.820090 | orchestrator | Friday 27 February 2026 00:37:08 +0000 (0:00:10.831) 0:00:12.203 ******* 
2026-02-27 00:37:13.820107 | orchestrator | changed: [testbed-manager]
2026-02-27 00:37:13.820124 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:37:13.820139 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:37:13.820153 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:37:13.820174 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:37:13.820195 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:37:13.820212 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:37:13.820228 | orchestrator |
2026-02-27 00:37:13.820244 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-27 00:37:13.820259 | orchestrator | Friday 27 February 2026 00:37:10 +0000 (0:00:01.453) 0:00:13.656 *******
2026-02-27 00:37:13.820275 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:13.820293 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:13.820310 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:13.820327 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:13.820343 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:13.820360 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:13.820375 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:13.820393 | orchestrator |
2026-02-27 00:37:13.820410 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-27 00:37:13.820428 | orchestrator | Friday 27 February 2026 00:37:11 +0000 (0:00:01.453) 0:00:15.110 *******
2026-02-27 00:37:13.820446 | orchestrator | changed: [testbed-manager]
2026-02-27 00:37:13.820459 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:37:13.820470 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:37:13.820481 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:37:13.820492 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:37:13.820503 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:37:13.820515 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:37:13.820525 | orchestrator |
2026-02-27 00:37:13.820536 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:37:13.820547 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:37:13.820588 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:37:13.820601 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:37:13.820613 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:37:13.820624 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:37:13.820635 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:37:13.820647 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:37:13.820657 | orchestrator |
2026-02-27 00:37:13.820666 | orchestrator |
2026-02-27 00:37:13.820676 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:37:13.820686 | orchestrator | Friday 27 February 2026 00:37:13 +0000 (0:00:01.656) 0:00:16.766 *******
2026-02-27 00:37:13.820696 | orchestrator | ===============================================================================
2026-02-27 00:37:13.820705 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 10.83s
2026-02-27 00:37:13.820714 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.66s
2026-02-27 00:37:13.820724 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.45s
2026-02-27 00:37:13.820734 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.45s
2026-02-27 00:37:13.820743 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.10s
2026-02-27 00:37:14.142149 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-27 00:37:14.142248 | orchestrator | + osism apply network
2026-02-27 00:37:26.259795 | orchestrator | 2026-02-27 00:37:26 | INFO  | Task 89f195da-6e8d-4b2f-a9bb-c27f48675f66 (network) was prepared for execution.
2026-02-27 00:37:26.259955 | orchestrator | 2026-02-27 00:37:26 | INFO  | It takes a moment until task 89f195da-6e8d-4b2f-a9bb-c27f48675f66 (network) has been started and output is visible here.
2026-02-27 00:37:54.841162 | orchestrator |
2026-02-27 00:37:54.841278 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-27 00:37:54.841296 | orchestrator |
2026-02-27 00:37:54.841308 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-27 00:37:54.841321 | orchestrator | Friday 27 February 2026 00:37:30 +0000 (0:00:00.274) 0:00:00.274 *******
2026-02-27 00:37:54.841333 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.841347 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:54.841358 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:54.841370 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:54.841381 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:54.841391 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:54.841402 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:54.841413 | orchestrator |
2026-02-27 00:37:54.841425 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-27 00:37:54.841436 | orchestrator | Friday 27 February 2026 00:37:31 +0000 (0:00:00.744) 0:00:01.019 *******
2026-02-27 00:37:54.841449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 00:37:54.841463 | orchestrator |
2026-02-27 00:37:54.841474 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-27 00:37:54.841507 | orchestrator | Friday 27 February 2026 00:37:32 +0000 (0:00:01.298) 0:00:02.318 *******
2026-02-27 00:37:54.841519 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:54.841530 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:54.841541 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:54.841552 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.841562 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:54.841573 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:54.841583 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:54.841594 | orchestrator |
2026-02-27 00:37:54.841605 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-27 00:37:54.841615 | orchestrator | Friday 27 February 2026 00:37:34 +0000 (0:00:01.748) 0:00:04.066 *******
2026-02-27 00:37:54.841626 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:54.841637 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.841648 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:54.841659 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:54.841670 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:54.841681 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:54.841691 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:54.841702 | orchestrator |
2026-02-27 00:37:54.841713 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-27 00:37:54.841723 | orchestrator | Friday 27 February 2026 00:37:36 +0000 (0:00:01.604) 0:00:05.670 *******
2026-02-27 00:37:54.841734 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-27 00:37:54.841745 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-27 00:37:54.841756 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-27 00:37:54.841767 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-27 00:37:54.841777 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-27 00:37:54.841788 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-27 00:37:54.841799 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-27 00:37:54.841809 | orchestrator |
2026-02-27 00:37:54.841838 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-27 00:37:54.841854 | orchestrator | Friday 27 February 2026 00:37:37 +0000 (0:00:01.010) 0:00:06.681 *******
2026-02-27 00:37:54.841865 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-27 00:37:54.841904 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-27 00:37:54.841916 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-27 00:37:54.841927 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-27 00:37:54.841938 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-27 00:37:54.841948 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-27 00:37:54.841959 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-27 00:37:54.841970 | orchestrator |
2026-02-27 00:37:54.841981 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-27 00:37:54.841992 | orchestrator | Friday 27 February 2026 00:37:40 +0000 (0:00:03.305) 0:00:09.987 *******
2026-02-27 00:37:54.842002 | orchestrator | changed: [testbed-manager]
2026-02-27 00:37:54.842013 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:37:54.842080 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:37:54.842091 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:37:54.842102 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:37:54.842113 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:37:54.842123 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:37:54.842134 | orchestrator |
2026-02-27 00:37:54.842145 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-27 00:37:54.842156 | orchestrator | Friday 27 February 2026 00:37:41 +0000 (0:00:01.574) 0:00:11.561 *******
2026-02-27 00:37:54.842167 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-27 00:37:54.842178 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-27 00:37:54.842189 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-27 00:37:54.842199 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-27 00:37:54.842220 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-27 00:37:54.842231 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-27 00:37:54.842242 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-27 00:37:54.842252 | orchestrator |
2026-02-27 00:37:54.842263 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-27 00:37:54.842274 | orchestrator | Friday 27 February 2026 00:37:43 +0000 (0:00:01.143) 0:00:13.378 *******
2026-02-27 00:37:54.842285 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.842296 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:54.842307 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:54.842318 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:54.842329 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:54.842340 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:54.842350 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:54.842361 | orchestrator |
2026-02-27 00:37:54.842372 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-27 00:37:54.842401 | orchestrator | Friday 27 February 2026 00:37:44 +0000 (0:00:01.143) 0:00:14.521 *******
2026-02-27 00:37:54.842413 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:37:54.842424 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:37:54.842435 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:37:54.842445 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:37:54.842456 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:37:54.842467 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:37:54.842478 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:37:54.842488 | orchestrator |
2026-02-27 00:37:54.842499 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-27 00:37:54.842510 | orchestrator | Friday 27 February 2026 00:37:45 +0000 (0:00:00.706) 0:00:15.227 *******
2026-02-27 00:37:54.842521 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.842532 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:54.842542 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:54.842553 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:54.842564 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:54.842575 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:54.842585 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:54.842596 | orchestrator |
2026-02-27 00:37:54.842607 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-27 00:37:54.842618 | orchestrator | Friday 27 February 2026 00:37:47 +0000 (0:00:02.115) 0:00:17.342 *******
2026-02-27 00:37:54.842628 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:37:54.842639 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:37:54.842650 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:37:54.842661 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:37:54.842671 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:37:54.842682 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:37:54.842694 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-27 00:37:54.842707 | orchestrator |
2026-02-27 00:37:54.842718 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-27 00:37:54.842729 | orchestrator | Friday 27 February 2026 00:37:48 +0000 (0:00:00.951) 0:00:18.294 *******
2026-02-27 00:37:54.842739 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.842750 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:37:54.842761 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:37:54.842772 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:37:54.842782 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:37:54.842793 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:37:54.842804 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:37:54.842814 | orchestrator |
2026-02-27 00:37:54.842825 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-27 00:37:54.842836 | orchestrator | Friday 27 February 2026 00:37:50 +0000 (0:00:01.634) 0:00:19.928 *******
2026-02-27 00:37:54.842847 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 00:37:54.842867 | orchestrator |
2026-02-27 00:37:54.842903 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-27 00:37:54.842914 | orchestrator | Friday 27 February 2026 00:37:51 +0000 (0:00:01.360) 0:00:21.288 *******
2026-02-27 00:37:54.842925 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:54.842936 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.842947 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:54.842958 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:54.842975 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:54.842986 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:54.842997 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:54.843010 | orchestrator |
2026-02-27 00:37:54.843028 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-27 00:37:54.843047 | orchestrator | Friday 27 February 2026 00:37:52 +0000 (0:00:00.952) 0:00:22.241 *******
2026-02-27 00:37:54.843064 | orchestrator | ok: [testbed-manager]
2026-02-27 00:37:54.843083 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:37:54.843101 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:37:54.843120 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:37:54.843137 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:37:54.843148 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:37:54.843159 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:37:54.843170 | orchestrator |
2026-02-27 00:37:54.843180 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-27 00:37:54.843191 | orchestrator | Friday 27 February 2026 00:37:53 +0000 (0:00:00.887) 0:00:23.128 *******
2026-02-27 00:37:54.843202 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-27 00:37:54.843213 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-27 00:37:54.843223 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-27 00:37:54.843234 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-27 00:37:54.843244 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-27 00:37:54.843255 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-27 00:37:54.843266 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-27 00:37:54.843276 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-27 00:37:54.843287 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-27 00:37:54.843298 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-27 00:37:54.843308 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-27 00:37:54.843319 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-27 00:37:54.843330 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-27 00:37:54.843341 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-27 00:37:54.843357 | orchestrator |
2026-02-27 00:37:54.843386 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-27 00:38:12.532790 | orchestrator | Friday 27 February 2026 00:37:54 +0000 (0:00:01.278) 0:00:24.406 *******
2026-02-27 00:38:12.532924 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:38:12.532939 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:38:12.532948 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:38:12.532956 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:38:12.532965 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:38:12.532973 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:38:12.532981 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:38:12.532989 | orchestrator |
2026-02-27 00:38:12.533020 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-27 00:38:12.533029 | orchestrator | Friday 27 February 2026 00:37:55 +0000 (0:00:00.642) 0:00:25.049 *******
2026-02-27 00:38:12.533039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-3, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-5
2026-02-27 00:38:12.533049 | orchestrator |
2026-02-27 00:38:12.533057 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-27 00:38:12.533065 | orchestrator | Friday 27 February 2026 00:38:00 +0000 (0:00:04.935) 0:00:29.984 *******
2026-02-27 00:38:12.533078 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533123 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533269 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533303 | orchestrator |
2026-02-27 00:38:12.533312 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-27 00:38:12.533320 | orchestrator | Friday 27 February 2026 00:38:06 +0000 (0:00:06.004) 0:00:35.989 *******
2026-02-27 00:38:12.533328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533345 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533414 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533423 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:12.533437 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-27 00:38:12.533455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:18.882627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-27 00:38:18.882749 | orchestrator |
2026-02-27 00:38:18.882773 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-27 00:38:18.882789 | orchestrator | Friday 27 February 2026 00:38:12 +0000 (0:00:06.098) 0:00:42.088 *******
2026-02-27 00:38:18.882805 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 00:38:18.882818 | orchestrator |
2026-02-27 00:38:18.882832 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-27 00:38:18.882845 | orchestrator | Friday 27 February 2026 00:38:13 +0000 (0:00:01.291) 0:00:43.379 *******
2026-02-27 00:38:18.882859 | orchestrator | ok: [testbed-manager]
2026-02-27 00:38:18.882953 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:38:18.882968 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:38:18.882983 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:38:18.882997 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:38:18.883011 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:38:18.883024 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:38:18.883038 | orchestrator |
2026-02-27 00:38:18.883052 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-27 00:38:18.883065 | orchestrator | Friday 27 February 2026 00:38:15 +0000 (0:00:01.205) 0:00:44.585 *******
2026-02-27 00:38:18.883079 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-27 00:38:18.883092 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-27 00:38:18.883107 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-27 00:38:18.883120 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-27 00:38:18.883133 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-27 00:38:18.883147 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-27 00:38:18.883160 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-27 00:38:18.883174 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-27 00:38:18.883187 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:38:18.883201 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-27 00:38:18.883214 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-27 00:38:18.883247 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-27 00:38:18.883261 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-27 00:38:18.883275 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:38:18.883317 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-27 00:38:18.883332 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-27 00:38:18.883346 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-27 00:38:18.883359 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-27 00:38:18.883370 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:38:18.883378 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-27 00:38:18.883386 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-27 00:38:18.883394 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-27 00:38:18.883402 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-27 00:38:18.883410 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:38:18.883418 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-27 00:38:18.883425 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-27 00:38:18.883433 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-27 00:38:18.883441 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-27 00:38:18.883449 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:38:18.883456 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:38:18.883464 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-27 00:38:18.883473 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-27 00:38:18.883482 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-27 00:38:18.883491 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-27 00:38:18.883499 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:38:18.883509 | orchestrator |
2026-02-27 00:38:18.883518 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-27 00:38:18.883546 | orchestrator | Friday 27 February 2026 00:38:17 +0000 (0:00:02.038) 0:00:46.624 *******
2026-02-27 00:38:18.883556 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:38:18.883565 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:38:18.883574 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:38:18.883583 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:38:18.883592 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:38:18.883601 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:38:18.883610 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:38:18.883619 | orchestrator |
2026-02-27 00:38:18.883628 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-27 00:38:18.883638 | orchestrator | Friday 27 February 2026 00:38:17 +0000 (0:00:00.689) 0:00:47.314 *******
2026-02-27 00:38:18.883647 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:38:18.883656 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:38:18.883665 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:38:18.883674 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:38:18.883684 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:38:18.883692 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:38:18.883701 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:38:18.883710 | orchestrator |
2026-02-27 00:38:18.883720 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:38:18.883730 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-27 00:38:18.883740 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-27 00:38:18.883756 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-27 00:38:18.883765 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-27 00:38:18.883775 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-27 00:38:18.883783 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-27 00:38:18.883792 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-27 00:38:18.883801 | orchestrator |
2026-02-27 00:38:18.883810 | orchestrator |
2026-02-27 00:38:18.883819 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:38:18.883828 | orchestrator | Friday 27 February 2026 00:38:18 +0000 (0:00:00.720) 0:00:48.034 *******
2026-02-27 00:38:18.883842 | orchestrator | ===============================================================================
2026-02-27 00:38:18.883850 | orchestrator | osism.commons.network : Create systemd networkd network
files ----------- 6.10s 2026-02-27 00:38:18.883858 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.00s 2026-02-27 00:38:18.883896 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.94s 2026-02-27 00:38:18.883906 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.31s 2026-02-27 00:38:18.883912 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.12s 2026-02-27 00:38:18.883919 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.04s 2026-02-27 00:38:18.883926 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.82s 2026-02-27 00:38:18.883932 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.75s 2026-02-27 00:38:18.883939 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.63s 2026-02-27 00:38:18.883945 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.60s 2026-02-27 00:38:18.883952 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.57s 2026-02-27 00:38:18.883958 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s 2026-02-27 00:38:18.883965 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.30s 2026-02-27 00:38:18.883971 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.29s 2026-02-27 00:38:18.883978 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.28s 2026-02-27 00:38:18.883984 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s 2026-02-27 00:38:18.883991 | orchestrator | osism.commons.network : Check if path for interface file exists 
--------- 1.14s 2026-02-27 00:38:18.883997 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s 2026-02-27 00:38:18.884004 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s 2026-02-27 00:38:18.884011 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.95s 2026-02-27 00:38:19.189952 | orchestrator | + osism apply wireguard 2026-02-27 00:38:31.246135 | orchestrator | 2026-02-27 00:38:31 | INFO  | Task bb3899e8-9fc6-45af-a59f-f249f24fd716 (wireguard) was prepared for execution. 2026-02-27 00:38:31.246278 | orchestrator | 2026-02-27 00:38:31 | INFO  | It takes a moment until task bb3899e8-9fc6-45af-a59f-f249f24fd716 (wireguard) has been started and output is visible here. 2026-02-27 00:38:52.177079 | orchestrator | 2026-02-27 00:38:52.177210 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-27 00:38:52.177226 | orchestrator | 2026-02-27 00:38:52.177237 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-27 00:38:52.177247 | orchestrator | Friday 27 February 2026 00:38:35 +0000 (0:00:00.228) 0:00:00.228 ******* 2026-02-27 00:38:52.177257 | orchestrator | ok: [testbed-manager] 2026-02-27 00:38:52.177270 | orchestrator | 2026-02-27 00:38:52.177279 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-27 00:38:52.177289 | orchestrator | Friday 27 February 2026 00:38:37 +0000 (0:00:01.555) 0:00:01.784 ******* 2026-02-27 00:38:52.177299 | orchestrator | changed: [testbed-manager] 2026-02-27 00:38:52.177313 | orchestrator | 2026-02-27 00:38:52.177323 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-27 00:38:52.177333 | orchestrator | Friday 27 February 2026 00:38:44 +0000 (0:00:06.688) 0:00:08.472 ******* 2026-02-27 00:38:52.177342 
| orchestrator | changed: [testbed-manager] 2026-02-27 00:38:52.177352 | orchestrator | 2026-02-27 00:38:52.177361 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-27 00:38:52.177371 | orchestrator | Friday 27 February 2026 00:38:44 +0000 (0:00:00.572) 0:00:09.044 ******* 2026-02-27 00:38:52.177380 | orchestrator | changed: [testbed-manager] 2026-02-27 00:38:52.177390 | orchestrator | 2026-02-27 00:38:52.177399 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-27 00:38:52.177409 | orchestrator | Friday 27 February 2026 00:38:45 +0000 (0:00:00.452) 0:00:09.497 ******* 2026-02-27 00:38:52.177418 | orchestrator | ok: [testbed-manager] 2026-02-27 00:38:52.177428 | orchestrator | 2026-02-27 00:38:52.177437 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-27 00:38:52.177448 | orchestrator | Friday 27 February 2026 00:38:45 +0000 (0:00:00.711) 0:00:10.208 ******* 2026-02-27 00:38:52.177465 | orchestrator | ok: [testbed-manager] 2026-02-27 00:38:52.177489 | orchestrator | 2026-02-27 00:38:52.177508 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-27 00:38:52.177524 | orchestrator | Friday 27 February 2026 00:38:46 +0000 (0:00:00.457) 0:00:10.666 ******* 2026-02-27 00:38:52.177540 | orchestrator | ok: [testbed-manager] 2026-02-27 00:38:52.177557 | orchestrator | 2026-02-27 00:38:52.177571 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-27 00:38:52.177588 | orchestrator | Friday 27 February 2026 00:38:46 +0000 (0:00:00.430) 0:00:11.097 ******* 2026-02-27 00:38:52.177603 | orchestrator | changed: [testbed-manager] 2026-02-27 00:38:52.177619 | orchestrator | 2026-02-27 00:38:52.177637 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-27 
00:38:52.177654 | orchestrator | Friday 27 February 2026 00:38:47 +0000 (0:00:01.264) 0:00:12.361 ******* 2026-02-27 00:38:52.177672 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-27 00:38:52.177690 | orchestrator | changed: [testbed-manager] 2026-02-27 00:38:52.177707 | orchestrator | 2026-02-27 00:38:52.177725 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-27 00:38:52.177742 | orchestrator | Friday 27 February 2026 00:38:48 +0000 (0:00:00.994) 0:00:13.356 ******* 2026-02-27 00:38:52.177760 | orchestrator | changed: [testbed-manager] 2026-02-27 00:38:52.177778 | orchestrator | 2026-02-27 00:38:52.177796 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-27 00:38:52.177813 | orchestrator | Friday 27 February 2026 00:38:50 +0000 (0:00:01.827) 0:00:15.183 ******* 2026-02-27 00:38:52.177823 | orchestrator | changed: [testbed-manager] 2026-02-27 00:38:52.177833 | orchestrator | 2026-02-27 00:38:52.177842 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:38:52.177885 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:38:52.177896 | orchestrator | 2026-02-27 00:38:52.177905 | orchestrator | 2026-02-27 00:38:52.177915 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:38:52.177937 | orchestrator | Friday 27 February 2026 00:38:51 +0000 (0:00:00.980) 0:00:16.163 ******* 2026-02-27 00:38:52.177947 | orchestrator | =============================================================================== 2026-02-27 00:38:52.177956 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.69s 2026-02-27 00:38:52.177966 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.83s 2026-02-27 
00:38:52.177976 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.56s 2026-02-27 00:38:52.177985 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.26s 2026-02-27 00:38:52.177995 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s 2026-02-27 00:38:52.178004 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s 2026-02-27 00:38:52.178014 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.71s 2026-02-27 00:38:52.178086 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2026-02-27 00:38:52.178096 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s 2026-02-27 00:38:52.178105 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.45s 2026-02-27 00:38:52.178115 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2026-02-27 00:38:52.548542 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-27 00:38:52.586109 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-27 00:38:52.586221 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-27 00:38:52.667619 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 183 0 --:--:-- --:--:-- --:--:-- 185 2026-02-27 00:38:52.684698 | orchestrator | + osism apply --environment custom workarounds 2026-02-27 00:38:54.726466 | orchestrator | 2026-02-27 00:38:54 | INFO  | Trying to run play workarounds in environment custom 2026-02-27 00:39:04.814373 | orchestrator | 2026-02-27 00:39:04 | INFO  | Task 9a3d8677-4b03-420a-9faf-f5f0bfe39b83 (workarounds) was prepared for execution. 
2026-02-27 00:39:04.814501 | orchestrator | 2026-02-27 00:39:04 | INFO  | It takes a moment until task 9a3d8677-4b03-420a-9faf-f5f0bfe39b83 (workarounds) has been started and output is visible here. 2026-02-27 00:39:29.851407 | orchestrator | 2026-02-27 00:39:29.851518 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 00:39:29.851536 | orchestrator | 2026-02-27 00:39:29.851548 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-02-27 00:39:29.851559 | orchestrator | Friday 27 February 2026 00:39:09 +0000 (0:00:00.132) 0:00:00.132 ******* 2026-02-27 00:39:29.851571 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-02-27 00:39:29.851582 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-02-27 00:39:29.851593 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-02-27 00:39:29.851604 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-02-27 00:39:29.851615 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-02-27 00:39:29.851626 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-02-27 00:39:29.851636 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-02-27 00:39:29.851647 | orchestrator | 2026-02-27 00:39:29.851658 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-02-27 00:39:29.851669 | orchestrator | 2026-02-27 00:39:29.851680 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-27 00:39:29.851691 | orchestrator | Friday 27 February 2026 00:39:09 +0000 (0:00:00.829) 0:00:00.962 ******* 2026-02-27 00:39:29.851702 | orchestrator | ok: [testbed-manager] 2026-02-27 00:39:29.851741 | orchestrator | 2026-02-27 00:39:29.851753 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-02-27 00:39:29.851764 | orchestrator | 2026-02-27 00:39:29.851775 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-27 00:39:29.851786 | orchestrator | Friday 27 February 2026 00:39:12 +0000 (0:00:02.323) 0:00:03.286 ******* 2026-02-27 00:39:29.851797 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:39:29.851808 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:39:29.851819 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:39:29.851874 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:39:29.851885 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:39:29.851896 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:39:29.851907 | orchestrator | 2026-02-27 00:39:29.851918 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-02-27 00:39:29.851928 | orchestrator | 2026-02-27 00:39:29.851940 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-02-27 00:39:29.851967 | orchestrator | Friday 27 February 2026 00:39:13 +0000 (0:00:01.740) 0:00:05.027 ******* 2026-02-27 00:39:29.851981 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-27 00:39:29.851994 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-27 00:39:29.852006 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-27 00:39:29.852018 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-27 00:39:29.852030 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-27 00:39:29.852042 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-27 00:39:29.852055 | orchestrator | 2026-02-27 00:39:29.852067 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-02-27 00:39:29.852080 | orchestrator | Friday 27 February 2026 00:39:15 +0000 (0:00:01.460) 0:00:06.488 ******* 2026-02-27 00:39:29.852092 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:39:29.852105 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:39:29.852118 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:39:29.852130 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:39:29.852142 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:39:29.852154 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:39:29.852166 | orchestrator | 2026-02-27 00:39:29.852178 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-02-27 00:39:29.852191 | orchestrator | Friday 27 February 2026 00:39:19 +0000 (0:00:03.849) 0:00:10.337 ******* 2026-02-27 00:39:29.852203 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:39:29.852216 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:39:29.852229 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:39:29.852241 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:39:29.852254 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:39:29.852266 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:39:29.852278 | orchestrator | 2026-02-27 00:39:29.852290 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-02-27 00:39:29.852301 | orchestrator | 2026-02-27 00:39:29.852312 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-02-27 00:39:29.852323 | orchestrator | Friday 27 February 2026 00:39:19 +0000 (0:00:00.722) 0:00:11.060 ******* 2026-02-27 
00:39:29.852333 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:39:29.852344 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:39:29.852355 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:39:29.852366 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:39:29.852376 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:39:29.852387 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:39:29.852407 | orchestrator | changed: [testbed-manager] 2026-02-27 00:39:29.852418 | orchestrator | 2026-02-27 00:39:29.852429 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-02-27 00:39:29.852440 | orchestrator | Friday 27 February 2026 00:39:21 +0000 (0:00:01.527) 0:00:12.588 ******* 2026-02-27 00:39:29.852451 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:39:29.852462 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:39:29.852472 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:39:29.852483 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:39:29.852494 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:39:29.852505 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:39:29.852534 | orchestrator | changed: [testbed-manager] 2026-02-27 00:39:29.852546 | orchestrator | 2026-02-27 00:39:29.852557 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-02-27 00:39:29.852568 | orchestrator | Friday 27 February 2026 00:39:23 +0000 (0:00:01.561) 0:00:14.150 ******* 2026-02-27 00:39:29.852579 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:39:29.852589 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:39:29.852600 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:39:29.852611 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:39:29.852622 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:39:29.852633 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:39:29.852644 | orchestrator | ok: [testbed-manager] 
2026-02-27 00:39:29.852654 | orchestrator | 2026-02-27 00:39:29.852665 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-02-27 00:39:29.852676 | orchestrator | Friday 27 February 2026 00:39:24 +0000 (0:00:01.517) 0:00:15.667 ******* 2026-02-27 00:39:29.852687 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:39:29.852698 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:39:29.852708 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:39:29.852719 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:39:29.852730 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:39:29.852741 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:39:29.852751 | orchestrator | changed: [testbed-manager] 2026-02-27 00:39:29.852762 | orchestrator | 2026-02-27 00:39:29.852773 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-02-27 00:39:29.852784 | orchestrator | Friday 27 February 2026 00:39:26 +0000 (0:00:01.791) 0:00:17.459 ******* 2026-02-27 00:39:29.852794 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:39:29.852805 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:39:29.852816 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:39:29.852845 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:39:29.852856 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:39:29.852867 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:39:29.852879 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:39:29.852889 | orchestrator | 2026-02-27 00:39:29.852901 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-02-27 00:39:29.852911 | orchestrator | 2026-02-27 00:39:29.852922 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-02-27 00:39:29.852933 | orchestrator | Friday 27 February 2026 00:39:27 +0000 (0:00:00.662) 
0:00:18.122 ******* 2026-02-27 00:39:29.852944 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:39:29.852955 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:39:29.852966 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:39:29.852977 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:39:29.852988 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:39:29.853004 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:39:29.853015 | orchestrator | ok: [testbed-manager] 2026-02-27 00:39:29.853026 | orchestrator | 2026-02-27 00:39:29.853037 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:39:29.853049 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:39:29.853061 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:29.853079 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:29.853090 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:29.853101 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:29.853112 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:29.853123 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:29.853134 | orchestrator | 2026-02-27 00:39:29.853145 | orchestrator | 2026-02-27 00:39:29.853156 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:39:29.853167 | orchestrator | Friday 27 February 2026 00:39:29 +0000 (0:00:02.800) 0:00:20.922 ******* 2026-02-27 00:39:29.853178 | orchestrator | 
=============================================================================== 2026-02-27 00:39:29.853189 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.85s 2026-02-27 00:39:29.853199 | orchestrator | Install python3-docker -------------------------------------------------- 2.80s 2026-02-27 00:39:29.853210 | orchestrator | Apply netplan configuration --------------------------------------------- 2.32s 2026-02-27 00:39:29.853221 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s 2026-02-27 00:39:29.853232 | orchestrator | Apply netplan configuration --------------------------------------------- 1.74s 2026-02-27 00:39:29.853244 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.56s 2026-02-27 00:39:29.853255 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.53s 2026-02-27 00:39:29.853265 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.52s 2026-02-27 00:39:29.853276 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s 2026-02-27 00:39:29.853287 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.83s 2026-02-27 00:39:29.853298 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s 2026-02-27 00:39:29.853315 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2026-02-27 00:39:30.548619 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-02-27 00:39:42.467307 | orchestrator | 2026-02-27 00:39:42 | INFO  | Task 8e5440f6-081d-4617-a2b6-74571681739d (reboot) was prepared for execution. 
2026-02-27 00:39:42.467405 | orchestrator | 2026-02-27 00:39:42 | INFO  | It takes a moment until task 8e5440f6-081d-4617-a2b6-74571681739d (reboot) has been started and output is visible here. 2026-02-27 00:39:52.952297 | orchestrator | 2026-02-27 00:39:52.952421 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-27 00:39:52.952443 | orchestrator | 2026-02-27 00:39:52.952458 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-27 00:39:52.952473 | orchestrator | Friday 27 February 2026 00:39:46 +0000 (0:00:00.207) 0:00:00.207 ******* 2026-02-27 00:39:52.952488 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:39:52.952505 | orchestrator | 2026-02-27 00:39:52.952519 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-27 00:39:52.952534 | orchestrator | Friday 27 February 2026 00:39:46 +0000 (0:00:00.119) 0:00:00.326 ******* 2026-02-27 00:39:52.952549 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:39:52.952563 | orchestrator | 2026-02-27 00:39:52.952577 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-27 00:39:52.952619 | orchestrator | Friday 27 February 2026 00:39:47 +0000 (0:00:00.935) 0:00:01.262 ******* 2026-02-27 00:39:52.952633 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:39:52.952647 | orchestrator | 2026-02-27 00:39:52.952661 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-27 00:39:52.952675 | orchestrator | 2026-02-27 00:39:52.952689 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-27 00:39:52.952704 | orchestrator | Friday 27 February 2026 00:39:48 +0000 (0:00:00.122) 0:00:01.384 ******* 2026-02-27 00:39:52.952717 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:39:52.952731 | 
orchestrator | 2026-02-27 00:39:52.952746 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-27 00:39:52.952760 | orchestrator | Friday 27 February 2026 00:39:48 +0000 (0:00:00.108) 0:00:01.493 ******* 2026-02-27 00:39:52.952774 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:39:52.952788 | orchestrator | 2026-02-27 00:39:52.952803 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-27 00:39:52.952895 | orchestrator | Friday 27 February 2026 00:39:48 +0000 (0:00:00.701) 0:00:02.194 ******* 2026-02-27 00:39:52.952915 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:39:52.952931 | orchestrator | 2026-02-27 00:39:52.952947 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-27 00:39:52.952962 | orchestrator | 2026-02-27 00:39:52.952977 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-27 00:39:52.952993 | orchestrator | Friday 27 February 2026 00:39:48 +0000 (0:00:00.128) 0:00:02.323 ******* 2026-02-27 00:39:52.953009 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:39:52.953024 | orchestrator | 2026-02-27 00:39:52.953038 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-27 00:39:52.953055 | orchestrator | Friday 27 February 2026 00:39:49 +0000 (0:00:00.208) 0:00:02.531 ******* 2026-02-27 00:39:52.953069 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:39:52.953086 | orchestrator | 2026-02-27 00:39:52.953103 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-27 00:39:52.953119 | orchestrator | Friday 27 February 2026 00:39:49 +0000 (0:00:00.704) 0:00:03.236 ******* 2026-02-27 00:39:52.953135 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:39:52.953147 | orchestrator | 2026-02-27 00:39:52.953158 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-27 00:39:52.953168 | orchestrator | 2026-02-27 00:39:52.953178 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-27 00:39:52.953187 | orchestrator | Friday 27 February 2026 00:39:50 +0000 (0:00:00.129) 0:00:03.366 ******* 2026-02-27 00:39:52.953195 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:39:52.953204 | orchestrator | 2026-02-27 00:39:52.953213 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-27 00:39:52.953221 | orchestrator | Friday 27 February 2026 00:39:50 +0000 (0:00:00.127) 0:00:03.494 ******* 2026-02-27 00:39:52.953230 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:39:52.953239 | orchestrator | 2026-02-27 00:39:52.953247 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-27 00:39:52.953256 | orchestrator | Friday 27 February 2026 00:39:50 +0000 (0:00:00.664) 0:00:04.158 ******* 2026-02-27 00:39:52.953264 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:39:52.953273 | orchestrator | 2026-02-27 00:39:52.953281 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-27 00:39:52.953290 | orchestrator | 2026-02-27 00:39:52.953298 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-27 00:39:52.953307 | orchestrator | Friday 27 February 2026 00:39:50 +0000 (0:00:00.111) 0:00:04.270 ******* 2026-02-27 00:39:52.953315 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:39:52.953324 | orchestrator | 2026-02-27 00:39:52.953332 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-27 00:39:52.953354 | orchestrator | Friday 27 February 2026 00:39:51 +0000 (0:00:00.111) 0:00:04.381 ******* 2026-02-27 
00:39:52.953363 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:39:52.953372 | orchestrator | 2026-02-27 00:39:52.953380 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-27 00:39:52.953388 | orchestrator | Friday 27 February 2026 00:39:51 +0000 (0:00:00.657) 0:00:05.038 ******* 2026-02-27 00:39:52.953397 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:39:52.953406 | orchestrator | 2026-02-27 00:39:52.953415 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-27 00:39:52.953423 | orchestrator | 2026-02-27 00:39:52.953432 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-27 00:39:52.953440 | orchestrator | Friday 27 February 2026 00:39:51 +0000 (0:00:00.111) 0:00:05.150 ******* 2026-02-27 00:39:52.953449 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:39:52.953458 | orchestrator | 2026-02-27 00:39:52.953466 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-27 00:39:52.953475 | orchestrator | Friday 27 February 2026 00:39:51 +0000 (0:00:00.106) 0:00:05.257 ******* 2026-02-27 00:39:52.953483 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:39:52.953492 | orchestrator | 2026-02-27 00:39:52.953507 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-27 00:39:52.953521 | orchestrator | Friday 27 February 2026 00:39:52 +0000 (0:00:00.687) 0:00:05.944 ******* 2026-02-27 00:39:52.953562 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:39:52.953577 | orchestrator | 2026-02-27 00:39:52.953591 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:39:52.953606 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:52.953622 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:52.953637 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:52.953650 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:52.953662 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:52.953675 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:39:52.953688 | orchestrator | 2026-02-27 00:39:52.953702 | orchestrator | 2026-02-27 00:39:52.953716 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:39:52.953729 | orchestrator | Friday 27 February 2026 00:39:52 +0000 (0:00:00.031) 0:00:05.976 ******* 2026-02-27 00:39:52.953754 | orchestrator | =============================================================================== 2026-02-27 00:39:52.953768 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.35s 2026-02-27 00:39:52.953782 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.78s 2026-02-27 00:39:52.953797 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2026-02-27 00:39:53.293397 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-27 00:40:05.406677 | orchestrator | 2026-02-27 00:40:05 | INFO  | Task 3af4d88a-64b2-46d8-b966-705ec19fed72 (wait-for-connection) was prepared for execution. 2026-02-27 00:40:05.406786 | orchestrator | 2026-02-27 00:40:05 | INFO  | It takes a moment until task 3af4d88a-64b2-46d8-b966-705ec19fed72 (wait-for-connection) has been started and output is visible here. 
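After the reboots, the job runs `osism apply wait-for-connection` so the deploy only continues once every node answers again. The job itself uses Ansible's `wait_for_connection` module for this; the following is a rough shell analogue of that polling step, where the `probe_node` helper and the attempt/interval defaults are illustrative assumptions, not code from the job.

```shell
# Illustrative SSH reachability probe; the real job delegates this to
# Ansible's wait_for_connection module.
probe_node() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true
}

# Poll a host until the probe succeeds, giving up after a fixed number of
# attempts. Attempt count and sleep interval are parameters so the budget
# can be tuned per call.
wait_for_node() {
    local host=$1 attempts=${2:-60} interval=${3:-10}
    local i
    for (( i = 1; i <= attempts; i++ )); do
        if probe_node "$host" 2>/dev/null; then
            return 0        # node answers again
        fi
        sleep "$interval"
    done
    return 1                # node never came back within the budget
}
```

In the job, the equivalent wait ran for about 11.5 s across all six nodes, well inside any sensible budget.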
2026-02-27 00:40:21.650356 | orchestrator | 2026-02-27 00:40:21.650505 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-27 00:40:21.650523 | orchestrator | 2026-02-27 00:40:21.650535 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-27 00:40:21.650547 | orchestrator | Friday 27 February 2026 00:40:09 +0000 (0:00:00.250) 0:00:00.250 ******* 2026-02-27 00:40:21.650558 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:40:21.650572 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:40:21.651481 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:40:21.651503 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:40:21.651515 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:40:21.651527 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:40:21.651538 | orchestrator | 2026-02-27 00:40:21.651550 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:40:21.651562 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:40:21.651575 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:40:21.651587 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:40:21.651598 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:40:21.651610 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:40:21.651621 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:40:21.651632 | orchestrator | 2026-02-27 00:40:21.651643 | orchestrator | 2026-02-27 00:40:21.651654 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-27 00:40:21.651665 | orchestrator | Friday 27 February 2026 00:40:21 +0000 (0:00:11.549) 0:00:11.800 ******* 2026-02-27 00:40:21.651676 | orchestrator | =============================================================================== 2026-02-27 00:40:21.651687 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.55s 2026-02-27 00:40:21.972972 | orchestrator | + osism apply hddtemp 2026-02-27 00:40:34.192521 | orchestrator | 2026-02-27 00:40:34 | INFO  | Task 3d122b99-b8ff-44fe-a2f4-827f9da2d747 (hddtemp) was prepared for execution. 2026-02-27 00:40:34.192610 | orchestrator | 2026-02-27 00:40:34 | INFO  | It takes a moment until task 3d122b99-b8ff-44fe-a2f4-827f9da2d747 (hddtemp) has been started and output is visible here. 2026-02-27 00:41:12.086765 | orchestrator | 2026-02-27 00:41:12.086903 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-27 00:41:12.086920 | orchestrator | 2026-02-27 00:41:12.086932 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-27 00:41:12.086944 | orchestrator | Friday 27 February 2026 00:40:38 +0000 (0:00:00.269) 0:00:00.269 ******* 2026-02-27 00:41:12.086955 | orchestrator | ok: [testbed-manager] 2026-02-27 00:41:12.086968 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:41:12.086979 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:41:12.086991 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:41:12.087002 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:41:12.087013 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:41:12.087024 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:41:12.087035 | orchestrator | 2026-02-27 00:41:12.087046 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-27 00:41:12.087058 | orchestrator | Friday 27 February 2026 
00:40:39 +0000 (0:00:00.734) 0:00:01.003 ******* 2026-02-27 00:41:12.087070 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 00:41:12.087109 | orchestrator | 2026-02-27 00:41:12.087121 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-27 00:41:12.087133 | orchestrator | Friday 27 February 2026 00:40:40 +0000 (0:00:01.275) 0:00:02.279 ******* 2026-02-27 00:41:12.087143 | orchestrator | ok: [testbed-manager] 2026-02-27 00:41:12.087154 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:41:12.087165 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:41:12.087176 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:41:12.087187 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:41:12.087199 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:41:12.087209 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:41:12.087220 | orchestrator | 2026-02-27 00:41:12.087232 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-27 00:41:12.087269 | orchestrator | Friday 27 February 2026 00:40:42 +0000 (0:00:02.310) 0:00:04.589 ******* 2026-02-27 00:41:12.087281 | orchestrator | changed: [testbed-manager] 2026-02-27 00:41:12.087293 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:41:12.087306 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:41:12.087318 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:41:12.087332 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:41:12.087344 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:41:12.087356 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:41:12.087370 | orchestrator | 2026-02-27 00:41:12.087382 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-27 00:41:12.087395 | orchestrator | Friday 27 February 2026 00:40:44 +0000 (0:00:01.180) 0:00:05.769 ******* 2026-02-27 00:41:12.087408 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:41:12.087420 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:41:12.087432 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:41:12.087445 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:41:12.087458 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:41:12.087471 | orchestrator | ok: [testbed-manager] 2026-02-27 00:41:12.087483 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:41:12.087495 | orchestrator | 2026-02-27 00:41:12.087508 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-27 00:41:12.087521 | orchestrator | Friday 27 February 2026 00:40:45 +0000 (0:00:01.336) 0:00:07.106 ******* 2026-02-27 00:41:12.087534 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:41:12.087546 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:41:12.087558 | orchestrator | changed: [testbed-manager] 2026-02-27 00:41:12.087571 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:41:12.087584 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:41:12.087596 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:41:12.087609 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:41:12.087621 | orchestrator | 2026-02-27 00:41:12.087634 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-27 00:41:12.087647 | orchestrator | Friday 27 February 2026 00:40:46 +0000 (0:00:00.915) 0:00:08.022 ******* 2026-02-27 00:41:12.087659 | orchestrator | changed: [testbed-manager] 2026-02-27 00:41:12.087670 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:41:12.087681 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:41:12.087692 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:41:12.087702 | orchestrator | changed: 
[testbed-node-3] 2026-02-27 00:41:12.087713 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:41:12.087724 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:41:12.087735 | orchestrator | 2026-02-27 00:41:12.087746 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-27 00:41:12.087756 | orchestrator | Friday 27 February 2026 00:41:07 +0000 (0:00:21.493) 0:00:29.515 ******* 2026-02-27 00:41:12.087767 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 00:41:12.087808 | orchestrator | 2026-02-27 00:41:12.087820 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-27 00:41:12.087831 | orchestrator | Friday 27 February 2026 00:41:09 +0000 (0:00:01.281) 0:00:30.796 ******* 2026-02-27 00:41:12.087842 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:41:12.087852 | orchestrator | changed: [testbed-manager] 2026-02-27 00:41:12.087864 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:41:12.087874 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:41:12.087885 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:41:12.087896 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:41:12.087907 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:41:12.087918 | orchestrator | 2026-02-27 00:41:12.087929 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:41:12.087940 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:41:12.087969 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:41:12.087980 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:41:12.087992 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:41:12.088003 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:41:12.088014 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:41:12.088024 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:41:12.088035 | orchestrator | 2026-02-27 00:41:12.088046 | orchestrator | 2026-02-27 00:41:12.088057 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:41:12.088068 | orchestrator | Friday 27 February 2026 00:41:11 +0000 (0:00:02.526) 0:00:33.323 ******* 2026-02-27 00:41:12.088079 | orchestrator | =============================================================================== 2026-02-27 00:41:12.088090 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 21.49s 2026-02-27 00:41:12.088101 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.53s 2026-02-27 00:41:12.088111 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.31s 2026-02-27 00:41:12.088127 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.34s 2026-02-27 00:41:12.088138 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.28s 2026-02-27 00:41:12.088149 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.28s 2026-02-27 00:41:12.088160 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2026-02-27 00:41:12.088171 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.92s 2026-02-27 00:41:12.088182 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2026-02-27 00:41:12.430091 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-27 00:41:12.494697 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-27 00:41:12.494845 | orchestrator | + sudo systemctl restart manager.service 2026-02-27 00:41:26.121265 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-27 00:41:26.121404 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-27 00:41:26.121433 | orchestrator | + local max_attempts=60 2026-02-27 00:41:26.121454 | orchestrator | + local name=ceph-ansible 2026-02-27 00:41:26.121474 | orchestrator | + local attempt_num=1 2026-02-27 00:41:26.121493 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:41:26.160451 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:41:26.160552 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:41:26.160572 | orchestrator | + sleep 5 2026-02-27 00:41:31.164929 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:41:31.191980 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:41:31.192052 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:41:31.192063 | orchestrator | + sleep 5 2026-02-27 00:41:36.195665 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:41:36.237018 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:41:36.237108 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:41:36.237123 | orchestrator | + sleep 5 2026-02-27 00:41:41.241695 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:41:41.285568 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:41:41.285646 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-27 00:41:41.285656 | orchestrator | + sleep 5 2026-02-27 00:41:46.288935 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:41:46.323909 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:41:46.323993 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:41:46.324006 | orchestrator | + sleep 5 2026-02-27 00:41:51.329523 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:41:51.364149 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:41:51.364233 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:41:51.364241 | orchestrator | + sleep 5 2026-02-27 00:41:56.369682 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:41:56.412686 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:41:56.412799 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:41:56.412819 | orchestrator | + sleep 5 2026-02-27 00:42:01.420069 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:42:01.456875 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:01.456983 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:42:01.457010 | orchestrator | + sleep 5 2026-02-27 00:42:06.458369 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:42:06.508585 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:06.508710 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:42:06.508736 | orchestrator | + sleep 5 2026-02-27 00:42:11.512347 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:42:11.555772 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:11.555852 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-27 00:42:11.555866 | orchestrator | + sleep 5 2026-02-27 00:42:16.560346 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:42:16.602788 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:16.603039 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:42:16.603159 | orchestrator | + sleep 5 2026-02-27 00:42:21.606903 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:42:21.648358 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:21.648464 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:42:21.648484 | orchestrator | + sleep 5 2026-02-27 00:42:26.653113 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:42:26.695174 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:26.695278 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-27 00:42:26.695295 | orchestrator | + sleep 5 2026-02-27 00:42:31.701328 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-27 00:42:31.746919 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:31.747041 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-27 00:42:31.747060 | orchestrator | + local max_attempts=60 2026-02-27 00:42:31.747073 | orchestrator | + local name=kolla-ansible 2026-02-27 00:42:31.747085 | orchestrator | + local attempt_num=1 2026-02-27 00:42:31.747580 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-27 00:42:31.785980 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:31.786115 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-27 00:42:31.786159 | orchestrator | + local max_attempts=60 2026-02-27 00:42:31.786171 | orchestrator | + local name=osism-ansible 2026-02-27 00:42:31.786181 | 
orchestrator | + local attempt_num=1 2026-02-27 00:42:31.786539 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-27 00:42:31.826253 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-27 00:42:31.826381 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-27 00:42:31.826409 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-27 00:42:31.993131 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-27 00:42:32.161217 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-27 00:42:32.337289 | orchestrator | ARA in osism-ansible already disabled. 2026-02-27 00:42:32.503079 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-27 00:42:32.503346 | orchestrator | + osism apply gather-facts 2026-02-27 00:42:44.901395 | orchestrator | 2026-02-27 00:42:44 | INFO  | Task e2a7a4b9-3fd7-4ce4-9870-5f621db4c47a (gather-facts) was prepared for execution. 2026-02-27 00:42:44.901520 | orchestrator | 2026-02-27 00:42:44 | INFO  | It takes a moment until task e2a7a4b9-3fd7-4ce4-9870-5f621db4c47a (gather-facts) has been started and output is visible here. 
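The `set -x` trace above shows the shape of the `wait_for_container_healthy` helper: it polls `docker inspect` for the container's health status every 5 seconds until it reports `healthy`, bailing out after `max_attempts` tries (here, `ceph-ansible` went `unhealthy` → `starting` → `healthy` over about a minute). A reconstruction consistent with that trace follows; the injectable probe and interval parameters are my addition so the loop can be exercised without Docker, not part of OSISM's script.

```shell
# Default probe, matching the command visible in the trace above.
docker_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll until the container reports "healthy"; give up after max_attempts.
# The probe and sleep interval are parameterised here for testability.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    local probe=${3:-docker_health}
    local interval=${4:-5}

    until [[ "$($probe "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep "$interval"
    done
}
```

With `max_attempts=60` and a 5 s interval, as in the trace, the helper tolerates up to roughly five minutes of container startup before failing the deploy.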
2026-02-27 00:42:58.262782 | orchestrator | 2026-02-27 00:42:58.262924 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-27 00:42:58.262954 | orchestrator | 2026-02-27 00:42:58.262976 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-27 00:42:58.262998 | orchestrator | Friday 27 February 2026 00:42:49 +0000 (0:00:00.233) 0:00:00.233 ******* 2026-02-27 00:42:58.263020 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:42:58.263043 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:42:58.263064 | orchestrator | ok: [testbed-manager] 2026-02-27 00:42:58.263084 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:42:58.263104 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:42:58.263125 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:42:58.263146 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:42:58.263167 | orchestrator | 2026-02-27 00:42:58.263188 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-27 00:42:58.263207 | orchestrator | 2026-02-27 00:42:58.263227 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-27 00:42:58.263249 | orchestrator | Friday 27 February 2026 00:42:57 +0000 (0:00:07.996) 0:00:08.229 ******* 2026-02-27 00:42:58.263272 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:42:58.263294 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:42:58.263315 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:42:58.263337 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:42:58.263357 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:42:58.263378 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:42:58.263398 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:42:58.263417 | orchestrator | 2026-02-27 00:42:58.263438 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-27 00:42:58.263459 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:42:58.263482 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:42:58.263503 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:42:58.263525 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:42:58.263547 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:42:58.263568 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:42:58.263625 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 00:42:58.263646 | orchestrator | 2026-02-27 00:42:58.263664 | orchestrator | 2026-02-27 00:42:58.263683 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:42:58.263703 | orchestrator | Friday 27 February 2026 00:42:57 +0000 (0:00:00.572) 0:00:08.802 ******* 2026-02-27 00:42:58.263800 | orchestrator | =============================================================================== 2026-02-27 00:42:58.263825 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.00s 2026-02-27 00:42:58.263846 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.57s 2026-02-27 00:42:58.604816 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-27 00:42:58.618921 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-27 
00:42:58.639774 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-27 00:42:58.659940 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-27 00:42:58.681013 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-27 00:42:58.706274 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-27 00:42:58.727587 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-27 00:42:58.743740 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-27 00:42:58.760673 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-27 00:42:58.774327 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-27 00:42:58.787066 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-27 00:42:58.799206 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-27 00:42:58.812782 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-27 00:42:58.834876 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-27 00:42:58.852077 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-27 00:42:58.866631 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-27 00:42:58.882079 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-27 00:42:58.898058 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-27 00:42:58.914484 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-27 00:42:58.931017 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-27 00:42:58.945419 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-27 00:42:58.959514 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-27 00:42:58.974488 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-27 00:42:58.994288 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-27 00:42:59.380484 | orchestrator | ok: Runtime: 0:25:22.146823 2026-02-27 00:42:59.480903 | 2026-02-27 00:42:59.481070 | TASK [Deploy services] 2026-02-27 00:43:00.013790 | orchestrator | skipping: Conditional result was False 2026-02-27 00:43:00.031482 | 2026-02-27 00:43:00.031710 | TASK [Deploy in a nutshell] 2026-02-27 00:43:00.718857 | orchestrator | + set -e 2026-02-27 00:43:00.719046 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-27 00:43:00.719081 | orchestrator | ++ export INTERACTIVE=false 2026-02-27 00:43:00.719102 | orchestrator | ++ INTERACTIVE=false 2026-02-27 00:43:00.719116 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-27 00:43:00.719128 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-27 00:43:00.719142 | 
orchestrator | + source /opt/manager-vars.sh 2026-02-27 00:43:00.719183 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-27 00:43:00.720274 | orchestrator | 2026-02-27 00:43:00.720320 | orchestrator | # PULL IMAGES 2026-02-27 00:43:00.720341 | orchestrator | 2026-02-27 00:43:00.720362 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-27 00:43:00.720390 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-27 00:43:00.720410 | orchestrator | ++ CEPH_VERSION=reef 2026-02-27 00:43:00.720437 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-27 00:43:00.720456 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-27 00:43:00.720487 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-27 00:43:00.720506 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-27 00:43:00.720528 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-27 00:43:00.720548 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-27 00:43:00.720566 | orchestrator | ++ export ARA=false 2026-02-27 00:43:00.720586 | orchestrator | ++ ARA=false 2026-02-27 00:43:00.720650 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-27 00:43:00.720669 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-27 00:43:00.720686 | orchestrator | ++ export TEMPEST=true 2026-02-27 00:43:00.720702 | orchestrator | ++ TEMPEST=true 2026-02-27 00:43:00.720748 | orchestrator | ++ export IS_ZUUL=true 2026-02-27 00:43:00.720769 | orchestrator | ++ IS_ZUUL=true 2026-02-27 00:43:00.720789 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.197 2026-02-27 00:43:00.720810 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.197 2026-02-27 00:43:00.720828 | orchestrator | ++ export EXTERNAL_API=false 2026-02-27 00:43:00.720846 | orchestrator | ++ EXTERNAL_API=false 2026-02-27 00:43:00.720864 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-27 00:43:00.720885 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-27 00:43:00.720904 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-27 00:43:00.720921 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-27 00:43:00.720939 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-27 00:43:00.720969 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-27 00:43:00.720989 | orchestrator | + echo 2026-02-27 00:43:00.721007 | orchestrator | + echo '# PULL IMAGES' 2026-02-27 00:43:00.721026 | orchestrator | + echo 2026-02-27 00:43:00.721059 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-27 00:43:00.785036 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-27 00:43:00.785119 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-27 00:43:02.565797 | orchestrator | 2026-02-27 00:43:02 | INFO  | Trying to run play pull-images in environment custom 2026-02-27 00:43:12.765882 | orchestrator | 2026-02-27 00:43:12 | INFO  | Task ff5e8efb-ed3f-41d3-8a29-f782cde64792 (pull-images) was prepared for execution. 2026-02-27 00:43:12.766014 | orchestrator | 2026-02-27 00:43:12 | INFO  | Task ff5e8efb-ed3f-41d3-8a29-f782cde64792 is running in background. No more output. Check ARA for logs. 2026-02-27 00:43:14.851061 | orchestrator | 2026-02-27 00:43:14 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-27 00:43:25.091425 | orchestrator | 2026-02-27 00:43:25 | INFO  | Task b7287db4-922a-4676-8a3d-67e50ab4f6d3 (wipe-partitions) was prepared for execution. 2026-02-27 00:43:25.092936 | orchestrator | 2026-02-27 00:43:25 | INFO  | It takes a moment until task b7287db4-922a-4676-8a3d-67e50ab4f6d3 (wipe-partitions) has been started and output is visible here. 
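The trace above gates the background image pull on a version check: `semver 9.5.0 7.0.0` (a helper sourced from `/opt/configuration/scripts/include.sh`) prints `1` when the first version is greater, and the script tests `[[ 1 -ge 0 ]]` before running `osism apply --no-wait -r 2 -e custom pull-images`. A minimal portable stand-in for that comparison, assuming GNU coreutils' `sort -V`; `ver_ge` is a hypothetical name, not part of the testbed scripts:

```shell
#!/bin/sh
# ver_ge A B: succeed when version A >= B.
# Hypothetical stand-in for the semver helper sourced from include.sh;
# relies on sort -V (version sort) from GNU coreutils.
ver_ge() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

# Mirror the gate from the log: manager 9.5.0 vs. minimum 7.0.0.
if ver_ge "9.5.0" "7.0.0"; then
    echo "manager >= 7.0.0, pulling images in the background"
    # osism apply --no-wait -r 2 -e custom pull-images
fi
```

This checks out equal versions too, since the larger-or-equal value sorts last.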
2026-02-27 00:43:37.929869 | orchestrator | 2026-02-27 00:43:37.930083 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-27 00:43:37.930122 | orchestrator | 2026-02-27 00:43:37.930144 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-27 00:43:37.930171 | orchestrator | Friday 27 February 2026 00:43:29 +0000 (0:00:00.149) 0:00:00.149 ******* 2026-02-27 00:43:37.930192 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:43:37.930215 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:43:37.930237 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:43:37.930256 | orchestrator | 2026-02-27 00:43:37.930277 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-27 00:43:37.930330 | orchestrator | Friday 27 February 2026 00:43:30 +0000 (0:00:00.590) 0:00:00.739 ******* 2026-02-27 00:43:37.930356 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:43:37.930383 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:43:37.930403 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:43:37.930451 | orchestrator | 2026-02-27 00:43:37.930476 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-27 00:43:37.930503 | orchestrator | Friday 27 February 2026 00:43:30 +0000 (0:00:00.333) 0:00:01.072 ******* 2026-02-27 00:43:37.930526 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:43:37.930566 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:43:37.930594 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:43:37.930615 | orchestrator | 2026-02-27 00:43:37.930637 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-27 00:43:37.930661 | orchestrator | Friday 27 February 2026 00:43:31 +0000 (0:00:00.553) 0:00:01.625 ******* 2026-02-27 00:43:37.930683 | orchestrator | skipping: 
[testbed-node-3] 2026-02-27 00:43:37.930731 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:43:37.930749 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:43:37.930769 | orchestrator | 2026-02-27 00:43:37.930789 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-27 00:43:37.930809 | orchestrator | Friday 27 February 2026 00:43:31 +0000 (0:00:00.241) 0:00:01.867 ******* 2026-02-27 00:43:37.930829 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-27 00:43:37.930853 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-27 00:43:37.930875 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-27 00:43:37.930894 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-27 00:43:37.930912 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-27 00:43:37.930930 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-27 00:43:37.930951 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-27 00:43:37.930970 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-27 00:43:37.930989 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-27 00:43:37.931009 | orchestrator | 2026-02-27 00:43:37.931029 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-27 00:43:37.931050 | orchestrator | Friday 27 February 2026 00:43:32 +0000 (0:00:01.161) 0:00:03.028 ******* 2026-02-27 00:43:37.931071 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-27 00:43:37.931089 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-27 00:43:37.931109 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-27 00:43:37.931129 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-27 00:43:37.931149 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-27 00:43:37.931168 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-02-27 00:43:37.931187 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-27 00:43:37.931205 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-27 00:43:37.931224 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-27 00:43:37.931244 | orchestrator | 2026-02-27 00:43:37.931265 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-27 00:43:37.931285 | orchestrator | Friday 27 February 2026 00:43:34 +0000 (0:00:01.560) 0:00:04.589 ******* 2026-02-27 00:43:37.931304 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-27 00:43:37.931322 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-27 00:43:37.931341 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-27 00:43:37.931361 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-27 00:43:37.931381 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-27 00:43:37.931399 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-27 00:43:37.931418 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-27 00:43:37.931438 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-27 00:43:37.931480 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-27 00:43:37.931498 | orchestrator | 2026-02-27 00:43:37.931517 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-27 00:43:37.931533 | orchestrator | Friday 27 February 2026 00:43:36 +0000 (0:00:02.038) 0:00:06.627 ******* 2026-02-27 00:43:37.931550 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:43:37.931568 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:43:37.931586 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:43:37.931603 | orchestrator | 2026-02-27 00:43:37.931621 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-02-27 00:43:37.931641 | orchestrator | Friday 27 February 2026 00:43:36 +0000 (0:00:00.580) 0:00:07.208 ******* 2026-02-27 00:43:37.931660 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:43:37.931680 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:43:37.931728 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:43:37.931747 | orchestrator | 2026-02-27 00:43:37.931767 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:43:37.931787 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:43:37.931809 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:43:37.931859 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:43:37.931881 | orchestrator | 2026-02-27 00:43:37.931899 | orchestrator | 2026-02-27 00:43:37.931917 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:43:37.931935 | orchestrator | Friday 27 February 2026 00:43:37 +0000 (0:00:00.634) 0:00:07.842 ******* 2026-02-27 00:43:37.931954 | orchestrator | =============================================================================== 2026-02-27 00:43:37.931973 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.04s 2026-02-27 00:43:37.931991 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.56s 2026-02-27 00:43:37.932012 | orchestrator | Check device availability ----------------------------------------------- 1.16s 2026-02-27 00:43:37.932032 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2026-02-27 00:43:37.932052 | orchestrator | Find all logical devices owned by UID 167 
------------------------------- 0.59s 2026-02-27 00:43:37.932069 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2026-02-27 00:43:37.932088 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s 2026-02-27 00:43:37.932107 | orchestrator | Remove all rook related logical devices --------------------------------- 0.33s 2026-02-27 00:43:37.932127 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-02-27 00:43:50.388760 | orchestrator | 2026-02-27 00:43:50 | INFO  | Task 7f901458-dfb4-4969-a464-6d6dacaf35be (facts) was prepared for execution. 2026-02-27 00:43:50.388863 | orchestrator | 2026-02-27 00:43:50 | INFO  | It takes a moment until task 7f901458-dfb4-4969-a464-6d6dacaf35be (facts) has been started and output is visible here. 2026-02-27 00:44:02.742350 | orchestrator | 2026-02-27 00:44:02.742450 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-27 00:44:02.742464 | orchestrator | 2026-02-27 00:44:02.742475 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-27 00:44:02.742486 | orchestrator | Friday 27 February 2026 00:43:54 +0000 (0:00:00.287) 0:00:00.288 ******* 2026-02-27 00:44:02.742496 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:44:02.742507 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:44:02.742517 | orchestrator | ok: [testbed-manager] 2026-02-27 00:44:02.742527 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:44:02.742554 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:44:02.742564 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:44:02.742574 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:44:02.742584 | orchestrator | 2026-02-27 00:44:02.742593 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-27 00:44:02.742603 | 
orchestrator | Friday 27 February 2026 00:43:55 +0000 (0:00:01.144) 0:00:01.432 ******* 2026-02-27 00:44:02.742613 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:44:02.742624 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:44:02.742634 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:44:02.742644 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:44:02.742653 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:02.742663 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:02.742712 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:02.742722 | orchestrator | 2026-02-27 00:44:02.742732 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-27 00:44:02.742742 | orchestrator | 2026-02-27 00:44:02.742758 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-27 00:44:02.742768 | orchestrator | Friday 27 February 2026 00:43:57 +0000 (0:00:01.332) 0:00:02.765 ******* 2026-02-27 00:44:02.742778 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:44:02.742788 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:44:02.742797 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:44:02.742808 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:44:02.742817 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:44:02.742827 | orchestrator | ok: [testbed-manager] 2026-02-27 00:44:02.742837 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:44:02.742846 | orchestrator | 2026-02-27 00:44:02.742856 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-27 00:44:02.742866 | orchestrator | 2026-02-27 00:44:02.742876 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-27 00:44:02.742885 | orchestrator | Friday 27 February 2026 00:44:01 +0000 (0:00:04.601) 0:00:07.367 ******* 2026-02-27 00:44:02.742895 | orchestrator | 
skipping: [testbed-manager] 2026-02-27 00:44:02.742905 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:44:02.742918 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:44:02.742929 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:44:02.742940 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:02.742951 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:02.742962 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:02.742974 | orchestrator | 2026-02-27 00:44:02.742986 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:44:02.742997 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:44:02.743008 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:44:02.743018 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:44:02.743028 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:44:02.743038 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:44:02.743048 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:44:02.743057 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:44:02.743067 | orchestrator | 2026-02-27 00:44:02.743077 | orchestrator | 2026-02-27 00:44:02.743087 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:44:02.743102 | orchestrator | Friday 27 February 2026 00:44:02 +0000 (0:00:00.522) 0:00:07.889 ******* 2026-02-27 00:44:02.743112 | orchestrator | =============================================================================== 
2026-02-27 00:44:02.743122 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.60s 2026-02-27 00:44:02.743135 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.33s 2026-02-27 00:44:02.743152 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-02-27 00:44:02.743168 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2026-02-27 00:44:05.230249 | orchestrator | 2026-02-27 00:44:05 | INFO  | Task 8b6f38b7-c658-4503-82b3-736ca1bda96e (ceph-configure-lvm-volumes) was prepared for execution. 2026-02-27 00:44:05.230371 | orchestrator | 2026-02-27 00:44:05 | INFO  | It takes a moment until task 8b6f38b7-c658-4503-82b3-736ca1bda96e (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-27 00:44:17.798923 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-27 00:44:17.799043 | orchestrator | 2.16.14 2026-02-27 00:44:17.799059 | orchestrator | 2026-02-27 00:44:17.799071 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-27 00:44:17.799084 | orchestrator | 2026-02-27 00:44:17.799122 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-27 00:44:17.799135 | orchestrator | Friday 27 February 2026 00:44:10 +0000 (0:00:00.489) 0:00:00.489 ******* 2026-02-27 00:44:17.799146 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-27 00:44:17.799158 | orchestrator | 2026-02-27 00:44:17.799169 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-27 00:44:17.799180 | orchestrator | Friday 27 February 2026 00:44:10 +0000 (0:00:00.267) 0:00:00.756 ******* 2026-02-27 00:44:17.799205 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:44:17.799217 | orchestrator | 
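The wipe-partitions play logged earlier boils down to a per-device signature wipe, a zero-fill of the first 32M, and a udev refresh. A dry-run sketch of that sequence — it only prints the commands, because running them for real destroys data; the device list matches this run's `/dev/sdb`–`/dev/sdd`, and `run` is an illustrative wrapper, not from the testbed scripts:

```shell
#!/bin/sh
# Dry-run sketch of the wipe-partitions sequence. 'run' echoes instead of
# executing; swap the echo for the real command only on disks you mean to wipe.
run() { echo "+ $*"; }

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    run wipefs --all "$dev"                       # drop filesystem/LVM/RAID signatures
    run dd if=/dev/zero of="$dev" bs=1M count=32  # zero first 32M (labels, partition table)
done
run udevadm control --reload-rules                # reload udev rules
run udevadm trigger --subsystem-match=block       # request device events from the kernel
```

The trailing `udevadm trigger` is what the "Request device events from the kernel" task corresponds to: it makes udev re-evaluate the now-blank disks so stale by-id links disappear.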
2026-02-27 00:44:17.799228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799239 | orchestrator | Friday 27 February 2026 00:44:10 +0000 (0:00:00.232) 0:00:00.989 ******* 2026-02-27 00:44:17.799250 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-27 00:44:17.799272 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-27 00:44:17.799284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-27 00:44:17.799305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-27 00:44:17.799316 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-27 00:44:17.799327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-27 00:44:17.799338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-27 00:44:17.799349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-27 00:44:17.799360 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-27 00:44:17.799371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-27 00:44:17.799382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-27 00:44:17.799393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-27 00:44:17.799403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-27 00:44:17.799414 | orchestrator | 2026-02-27 00:44:17.799425 | orchestrator | TASK [Add known links to the list of 
available block devices] ****************** 2026-02-27 00:44:17.799436 | orchestrator | Friday 27 February 2026 00:44:11 +0000 (0:00:00.485) 0:00:01.475 ******* 2026-02-27 00:44:17.799470 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.799483 | orchestrator | 2026-02-27 00:44:17.799496 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799508 | orchestrator | Friday 27 February 2026 00:44:11 +0000 (0:00:00.201) 0:00:01.677 ******* 2026-02-27 00:44:17.799520 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.799532 | orchestrator | 2026-02-27 00:44:17.799543 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799557 | orchestrator | Friday 27 February 2026 00:44:11 +0000 (0:00:00.196) 0:00:01.874 ******* 2026-02-27 00:44:17.799569 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.799582 | orchestrator | 2026-02-27 00:44:17.799595 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799607 | orchestrator | Friday 27 February 2026 00:44:11 +0000 (0:00:00.235) 0:00:02.109 ******* 2026-02-27 00:44:17.799624 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.799637 | orchestrator | 2026-02-27 00:44:17.799649 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799687 | orchestrator | Friday 27 February 2026 00:44:11 +0000 (0:00:00.216) 0:00:02.325 ******* 2026-02-27 00:44:17.799700 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.799712 | orchestrator | 2026-02-27 00:44:17.799724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799737 | orchestrator | Friday 27 February 2026 00:44:12 +0000 (0:00:00.201) 0:00:02.527 ******* 2026-02-27 00:44:17.799749 | orchestrator | skipping: 
[testbed-node-3] 2026-02-27 00:44:17.799761 | orchestrator | 2026-02-27 00:44:17.799773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799786 | orchestrator | Friday 27 February 2026 00:44:12 +0000 (0:00:00.200) 0:00:02.728 ******* 2026-02-27 00:44:17.799813 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.799824 | orchestrator | 2026-02-27 00:44:17.799835 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799846 | orchestrator | Friday 27 February 2026 00:44:12 +0000 (0:00:00.238) 0:00:02.967 ******* 2026-02-27 00:44:17.799857 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.799868 | orchestrator | 2026-02-27 00:44:17.799879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799890 | orchestrator | Friday 27 February 2026 00:44:12 +0000 (0:00:00.203) 0:00:03.170 ******* 2026-02-27 00:44:17.799901 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b) 2026-02-27 00:44:17.799913 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b) 2026-02-27 00:44:17.799924 | orchestrator | 2026-02-27 00:44:17.799935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.799964 | orchestrator | Friday 27 February 2026 00:44:13 +0000 (0:00:00.437) 0:00:03.607 ******* 2026-02-27 00:44:17.799976 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222) 2026-02-27 00:44:17.799993 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222) 2026-02-27 00:44:17.800005 | orchestrator | 2026-02-27 00:44:17.800016 | orchestrator | TASK [Add known links to the list of available block 
devices] ****************** 2026-02-27 00:44:17.800027 | orchestrator | Friday 27 February 2026 00:44:13 +0000 (0:00:00.639) 0:00:04.248 ******* 2026-02-27 00:44:17.800037 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c) 2026-02-27 00:44:17.800048 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c) 2026-02-27 00:44:17.800059 | orchestrator | 2026-02-27 00:44:17.800070 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.800081 | orchestrator | Friday 27 February 2026 00:44:14 +0000 (0:00:00.715) 0:00:04.963 ******* 2026-02-27 00:44:17.800100 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b) 2026-02-27 00:44:17.800111 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b) 2026-02-27 00:44:17.800122 | orchestrator | 2026-02-27 00:44:17.800133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:17.800144 | orchestrator | Friday 27 February 2026 00:44:15 +0000 (0:00:00.938) 0:00:05.902 ******* 2026-02-27 00:44:17.800155 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-27 00:44:17.800166 | orchestrator | 2026-02-27 00:44:17.800178 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800197 | orchestrator | Friday 27 February 2026 00:44:15 +0000 (0:00:00.355) 0:00:06.257 ******* 2026-02-27 00:44:17.800217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-27 00:44:17.800236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-27 00:44:17.800254 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-27 00:44:17.800273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-27 00:44:17.800293 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-27 00:44:17.800312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-27 00:44:17.800332 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-27 00:44:17.800344 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-27 00:44:17.800355 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-27 00:44:17.800366 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-27 00:44:17.800376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-27 00:44:17.800387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-27 00:44:17.800398 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-27 00:44:17.800408 | orchestrator | 2026-02-27 00:44:17.800419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800430 | orchestrator | Friday 27 February 2026 00:44:16 +0000 (0:00:00.390) 0:00:06.647 ******* 2026-02-27 00:44:17.800441 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.800452 | orchestrator | 2026-02-27 00:44:17.800463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800474 | orchestrator | Friday 27 February 2026 00:44:16 +0000 (0:00:00.210) 
0:00:06.857 ******* 2026-02-27 00:44:17.800484 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.800495 | orchestrator | 2026-02-27 00:44:17.800506 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800517 | orchestrator | Friday 27 February 2026 00:44:16 +0000 (0:00:00.203) 0:00:07.060 ******* 2026-02-27 00:44:17.800528 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.800538 | orchestrator | 2026-02-27 00:44:17.800549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800560 | orchestrator | Friday 27 February 2026 00:44:16 +0000 (0:00:00.216) 0:00:07.277 ******* 2026-02-27 00:44:17.800571 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.800582 | orchestrator | 2026-02-27 00:44:17.800592 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800603 | orchestrator | Friday 27 February 2026 00:44:17 +0000 (0:00:00.207) 0:00:07.484 ******* 2026-02-27 00:44:17.800622 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.800633 | orchestrator | 2026-02-27 00:44:17.800643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800655 | orchestrator | Friday 27 February 2026 00:44:17 +0000 (0:00:00.188) 0:00:07.673 ******* 2026-02-27 00:44:17.800685 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.800696 | orchestrator | 2026-02-27 00:44:17.800707 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:17.800718 | orchestrator | Friday 27 February 2026 00:44:17 +0000 (0:00:00.246) 0:00:07.919 ******* 2026-02-27 00:44:17.800729 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:17.800740 | orchestrator | 2026-02-27 00:44:17.800758 | orchestrator | TASK [Add known partitions to the 
list of available block devices] ************* 2026-02-27 00:44:25.873205 | orchestrator | Friday 27 February 2026 00:44:17 +0000 (0:00:00.209) 0:00:08.128 ******* 2026-02-27 00:44:25.873384 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.873413 | orchestrator | 2026-02-27 00:44:25.873436 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:25.873449 | orchestrator | Friday 27 February 2026 00:44:18 +0000 (0:00:00.229) 0:00:08.358 ******* 2026-02-27 00:44:25.873461 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-27 00:44:25.873499 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-27 00:44:25.873512 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-27 00:44:25.873523 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-27 00:44:25.873533 | orchestrator | 2026-02-27 00:44:25.873545 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:25.873556 | orchestrator | Friday 27 February 2026 00:44:19 +0000 (0:00:01.201) 0:00:09.560 ******* 2026-02-27 00:44:25.873567 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.873578 | orchestrator | 2026-02-27 00:44:25.873590 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:25.873601 | orchestrator | Friday 27 February 2026 00:44:19 +0000 (0:00:00.214) 0:00:09.774 ******* 2026-02-27 00:44:25.873612 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.873623 | orchestrator | 2026-02-27 00:44:25.873634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:25.873645 | orchestrator | Friday 27 February 2026 00:44:19 +0000 (0:00:00.215) 0:00:09.990 ******* 2026-02-27 00:44:25.873695 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.873710 | orchestrator | 2026-02-27 00:44:25.873722 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:25.873735 | orchestrator | Friday 27 February 2026 00:44:19 +0000 (0:00:00.221) 0:00:10.212 ******* 2026-02-27 00:44:25.873747 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.873759 | orchestrator | 2026-02-27 00:44:25.873772 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-27 00:44:25.873784 | orchestrator | Friday 27 February 2026 00:44:20 +0000 (0:00:00.229) 0:00:10.442 ******* 2026-02-27 00:44:25.873796 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-27 00:44:25.873809 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-27 00:44:25.873821 | orchestrator | 2026-02-27 00:44:25.873832 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-27 00:44:25.873845 | orchestrator | Friday 27 February 2026 00:44:20 +0000 (0:00:00.169) 0:00:10.611 ******* 2026-02-27 00:44:25.873857 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.873869 | orchestrator | 2026-02-27 00:44:25.873881 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-27 00:44:25.873894 | orchestrator | Friday 27 February 2026 00:44:20 +0000 (0:00:00.142) 0:00:10.754 ******* 2026-02-27 00:44:25.873914 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.873931 | orchestrator | 2026-02-27 00:44:25.873947 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-27 00:44:25.873964 | orchestrator | Friday 27 February 2026 00:44:20 +0000 (0:00:00.138) 0:00:10.892 ******* 2026-02-27 00:44:25.874092 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.874118 | orchestrator | 2026-02-27 00:44:25.874137 | orchestrator | TASK [Define lvm_volumes structures] 
******************************************* 2026-02-27 00:44:25.874149 | orchestrator | Friday 27 February 2026 00:44:20 +0000 (0:00:00.131) 0:00:11.024 ******* 2026-02-27 00:44:25.874160 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:44:25.874171 | orchestrator | 2026-02-27 00:44:25.874182 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-27 00:44:25.874193 | orchestrator | Friday 27 February 2026 00:44:20 +0000 (0:00:00.134) 0:00:11.158 ******* 2026-02-27 00:44:25.874205 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}}) 2026-02-27 00:44:25.874216 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '15e091ae-77f4-5dd5-92b2-2aa74778b61d'}}) 2026-02-27 00:44:25.874227 | orchestrator | 2026-02-27 00:44:25.874238 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-27 00:44:25.874249 | orchestrator | Friday 27 February 2026 00:44:20 +0000 (0:00:00.161) 0:00:11.320 ******* 2026-02-27 00:44:25.874261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}})  2026-02-27 00:44:25.874282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '15e091ae-77f4-5dd5-92b2-2aa74778b61d'}})  2026-02-27 00:44:25.874293 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.874304 | orchestrator | 2026-02-27 00:44:25.874315 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-27 00:44:25.874325 | orchestrator | Friday 27 February 2026 00:44:21 +0000 (0:00:00.142) 0:00:11.463 ******* 2026-02-27 00:44:25.874336 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}})  2026-02-27 00:44:25.874347 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '15e091ae-77f4-5dd5-92b2-2aa74778b61d'}})  2026-02-27 00:44:25.874358 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.874369 | orchestrator | 2026-02-27 00:44:25.874379 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-27 00:44:25.874390 | orchestrator | Friday 27 February 2026 00:44:21 +0000 (0:00:00.370) 0:00:11.833 ******* 2026-02-27 00:44:25.874400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}})  2026-02-27 00:44:25.874436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '15e091ae-77f4-5dd5-92b2-2aa74778b61d'}})  2026-02-27 00:44:25.874447 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.874458 | orchestrator | 2026-02-27 00:44:25.874468 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-27 00:44:25.874479 | orchestrator | Friday 27 February 2026 00:44:21 +0000 (0:00:00.159) 0:00:11.993 ******* 2026-02-27 00:44:25.874490 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:44:25.874500 | orchestrator | 2026-02-27 00:44:25.874511 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-27 00:44:25.874522 | orchestrator | Friday 27 February 2026 00:44:21 +0000 (0:00:00.143) 0:00:12.136 ******* 2026-02-27 00:44:25.874539 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:44:25.874557 | orchestrator | 2026-02-27 00:44:25.874573 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-27 00:44:25.874590 | orchestrator | Friday 27 February 2026 00:44:21 +0000 (0:00:00.143) 0:00:12.279 ******* 2026-02-27 00:44:25.874608 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.874625 | orchestrator | 
2026-02-27 00:44:25.874642 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-27 00:44:25.874686 | orchestrator | Friday 27 February 2026 00:44:22 +0000 (0:00:00.141) 0:00:12.421 ******* 2026-02-27 00:44:25.874723 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.874744 | orchestrator | 2026-02-27 00:44:25.874761 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-27 00:44:25.874781 | orchestrator | Friday 27 February 2026 00:44:22 +0000 (0:00:00.139) 0:00:12.560 ******* 2026-02-27 00:44:25.874800 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.874818 | orchestrator | 2026-02-27 00:44:25.874837 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-27 00:44:25.874855 | orchestrator | Friday 27 February 2026 00:44:22 +0000 (0:00:00.137) 0:00:12.697 ******* 2026-02-27 00:44:25.874875 | orchestrator | ok: [testbed-node-3] => { 2026-02-27 00:44:25.874895 | orchestrator |  "ceph_osd_devices": { 2026-02-27 00:44:25.874912 | orchestrator |  "sdb": { 2026-02-27 00:44:25.874932 | orchestrator |  "osd_lvm_uuid": "c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a" 2026-02-27 00:44:25.874951 | orchestrator |  }, 2026-02-27 00:44:25.874968 | orchestrator |  "sdc": { 2026-02-27 00:44:25.874987 | orchestrator |  "osd_lvm_uuid": "15e091ae-77f4-5dd5-92b2-2aa74778b61d" 2026-02-27 00:44:25.875006 | orchestrator |  } 2026-02-27 00:44:25.875024 | orchestrator |  } 2026-02-27 00:44:25.875043 | orchestrator | } 2026-02-27 00:44:25.875063 | orchestrator | 2026-02-27 00:44:25.875081 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-27 00:44:25.875112 | orchestrator | Friday 27 February 2026 00:44:22 +0000 (0:00:00.144) 0:00:12.842 ******* 2026-02-27 00:44:25.875132 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.875151 | orchestrator | 
2026-02-27 00:44:25.875169 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-27 00:44:25.875187 | orchestrator | Friday 27 February 2026 00:44:22 +0000 (0:00:00.139) 0:00:12.982 ******* 2026-02-27 00:44:25.875206 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.875224 | orchestrator | 2026-02-27 00:44:25.875242 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-27 00:44:25.875261 | orchestrator | Friday 27 February 2026 00:44:22 +0000 (0:00:00.122) 0:00:13.105 ******* 2026-02-27 00:44:25.875280 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:44:25.875299 | orchestrator | 2026-02-27 00:44:25.875317 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-27 00:44:25.875335 | orchestrator | Friday 27 February 2026 00:44:22 +0000 (0:00:00.145) 0:00:13.250 ******* 2026-02-27 00:44:25.875354 | orchestrator | changed: [testbed-node-3] => { 2026-02-27 00:44:25.875372 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-27 00:44:25.875389 | orchestrator |  "ceph_osd_devices": { 2026-02-27 00:44:25.875409 | orchestrator |  "sdb": { 2026-02-27 00:44:25.875427 | orchestrator |  "osd_lvm_uuid": "c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a" 2026-02-27 00:44:25.875445 | orchestrator |  }, 2026-02-27 00:44:25.875464 | orchestrator |  "sdc": { 2026-02-27 00:44:25.875483 | orchestrator |  "osd_lvm_uuid": "15e091ae-77f4-5dd5-92b2-2aa74778b61d" 2026-02-27 00:44:25.875501 | orchestrator |  } 2026-02-27 00:44:25.875520 | orchestrator |  }, 2026-02-27 00:44:25.875539 | orchestrator |  "lvm_volumes": [ 2026-02-27 00:44:25.875557 | orchestrator |  { 2026-02-27 00:44:25.875573 | orchestrator |  "data": "osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a", 2026-02-27 00:44:25.875584 | orchestrator |  "data_vg": "ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a" 2026-02-27 00:44:25.875595 | orchestrator |  }, 
2026-02-27 00:44:25.875605 | orchestrator |  { 2026-02-27 00:44:25.875616 | orchestrator |  "data": "osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d", 2026-02-27 00:44:25.875627 | orchestrator |  "data_vg": "ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d" 2026-02-27 00:44:25.875637 | orchestrator |  } 2026-02-27 00:44:25.875648 | orchestrator |  ] 2026-02-27 00:44:25.875679 | orchestrator |  } 2026-02-27 00:44:25.875690 | orchestrator | } 2026-02-27 00:44:25.875713 | orchestrator | 2026-02-27 00:44:25.875723 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-27 00:44:25.875734 | orchestrator | Friday 27 February 2026 00:44:23 +0000 (0:00:00.423) 0:00:13.673 ******* 2026-02-27 00:44:25.875745 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-27 00:44:25.875756 | orchestrator | 2026-02-27 00:44:25.875767 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-27 00:44:25.875777 | orchestrator | 2026-02-27 00:44:25.875788 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-27 00:44:25.875799 | orchestrator | Friday 27 February 2026 00:44:25 +0000 (0:00:01.944) 0:00:15.617 ******* 2026-02-27 00:44:25.875809 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-27 00:44:25.875820 | orchestrator | 2026-02-27 00:44:25.875831 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-27 00:44:25.875842 | orchestrator | Friday 27 February 2026 00:44:25 +0000 (0:00:00.262) 0:00:15.880 ******* 2026-02-27 00:44:25.875853 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:44:25.875863 | orchestrator | 2026-02-27 00:44:25.875887 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.400883 | orchestrator | Friday 27 February 2026 00:44:25 +0000 (0:00:00.324) 
0:00:16.205 ******* 2026-02-27 00:44:34.400999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-27 00:44:34.401015 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-27 00:44:34.401025 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-27 00:44:34.401035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-27 00:44:34.401045 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-27 00:44:34.401055 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-27 00:44:34.401065 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-27 00:44:34.401094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-27 00:44:34.401105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-27 00:44:34.401115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-27 00:44:34.401124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-27 00:44:34.401134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-27 00:44:34.401148 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-27 00:44:34.401159 | orchestrator | 2026-02-27 00:44:34.401171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401181 | orchestrator | Friday 27 February 2026 00:44:26 +0000 (0:00:00.354) 0:00:16.559 ******* 2026-02-27 00:44:34.401191 | orchestrator | skipping: 
[testbed-node-4] 2026-02-27 00:44:34.401201 | orchestrator | 2026-02-27 00:44:34.401211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401221 | orchestrator | Friday 27 February 2026 00:44:26 +0000 (0:00:00.195) 0:00:16.754 ******* 2026-02-27 00:44:34.401231 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.401240 | orchestrator | 2026-02-27 00:44:34.401250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401259 | orchestrator | Friday 27 February 2026 00:44:26 +0000 (0:00:00.144) 0:00:16.899 ******* 2026-02-27 00:44:34.401269 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.401278 | orchestrator | 2026-02-27 00:44:34.401288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401298 | orchestrator | Friday 27 February 2026 00:44:26 +0000 (0:00:00.135) 0:00:17.034 ******* 2026-02-27 00:44:34.401328 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.401338 | orchestrator | 2026-02-27 00:44:34.401347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401357 | orchestrator | Friday 27 February 2026 00:44:26 +0000 (0:00:00.158) 0:00:17.193 ******* 2026-02-27 00:44:34.401366 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.401376 | orchestrator | 2026-02-27 00:44:34.401385 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401395 | orchestrator | Friday 27 February 2026 00:44:27 +0000 (0:00:00.535) 0:00:17.728 ******* 2026-02-27 00:44:34.401406 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.401417 | orchestrator | 2026-02-27 00:44:34.401427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401438 | 
orchestrator | Friday 27 February 2026 00:44:27 +0000 (0:00:00.188) 0:00:17.917 ******* 2026-02-27 00:44:34.401449 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.401460 | orchestrator | 2026-02-27 00:44:34.401471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401482 | orchestrator | Friday 27 February 2026 00:44:27 +0000 (0:00:00.207) 0:00:18.125 ******* 2026-02-27 00:44:34.401492 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.401503 | orchestrator | 2026-02-27 00:44:34.401514 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401525 | orchestrator | Friday 27 February 2026 00:44:27 +0000 (0:00:00.192) 0:00:18.317 ******* 2026-02-27 00:44:34.401535 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297) 2026-02-27 00:44:34.401547 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297) 2026-02-27 00:44:34.401558 | orchestrator | 2026-02-27 00:44:34.401569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401580 | orchestrator | Friday 27 February 2026 00:44:28 +0000 (0:00:00.394) 0:00:18.711 ******* 2026-02-27 00:44:34.401591 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d) 2026-02-27 00:44:34.401602 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d) 2026-02-27 00:44:34.401613 | orchestrator | 2026-02-27 00:44:34.401623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401633 | orchestrator | Friday 27 February 2026 00:44:28 +0000 (0:00:00.489) 0:00:19.200 ******* 2026-02-27 00:44:34.401704 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da) 2026-02-27 00:44:34.401717 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da) 2026-02-27 00:44:34.401728 | orchestrator | 2026-02-27 00:44:34.401739 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401770 | orchestrator | Friday 27 February 2026 00:44:29 +0000 (0:00:00.462) 0:00:19.662 ******* 2026-02-27 00:44:34.401781 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47) 2026-02-27 00:44:34.401792 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47) 2026-02-27 00:44:34.401802 | orchestrator | 2026-02-27 00:44:34.401819 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:34.401830 | orchestrator | Friday 27 February 2026 00:44:29 +0000 (0:00:00.471) 0:00:20.135 ******* 2026-02-27 00:44:34.401840 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-27 00:44:34.401850 | orchestrator | 2026-02-27 00:44:34.401859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.401869 | orchestrator | Friday 27 February 2026 00:44:30 +0000 (0:00:00.350) 0:00:20.485 ******* 2026-02-27 00:44:34.401879 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-27 00:44:34.401898 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-27 00:44:34.401908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-27 00:44:34.401917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-27 00:44:34.401927 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-27 00:44:34.401937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-27 00:44:34.401946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-27 00:44:34.401956 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-27 00:44:34.401965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-27 00:44:34.401975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-27 00:44:34.401984 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-27 00:44:34.401994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-27 00:44:34.402004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-27 00:44:34.402013 | orchestrator | 2026-02-27 00:44:34.402082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402093 | orchestrator | Friday 27 February 2026 00:44:30 +0000 (0:00:00.457) 0:00:20.942 ******* 2026-02-27 00:44:34.402103 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.402112 | orchestrator | 2026-02-27 00:44:34.402122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402131 | orchestrator | Friday 27 February 2026 00:44:31 +0000 (0:00:00.789) 0:00:21.732 ******* 2026-02-27 00:44:34.402141 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.402151 | orchestrator | 2026-02-27 00:44:34.402160 | orchestrator | TASK [Add known partitions to the list of available block 
devices] ************* 2026-02-27 00:44:34.402170 | orchestrator | Friday 27 February 2026 00:44:31 +0000 (0:00:00.265) 0:00:21.997 ******* 2026-02-27 00:44:34.402180 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.402189 | orchestrator | 2026-02-27 00:44:34.402199 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402209 | orchestrator | Friday 27 February 2026 00:44:31 +0000 (0:00:00.251) 0:00:22.249 ******* 2026-02-27 00:44:34.402218 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.402228 | orchestrator | 2026-02-27 00:44:34.402238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402247 | orchestrator | Friday 27 February 2026 00:44:32 +0000 (0:00:00.274) 0:00:22.524 ******* 2026-02-27 00:44:34.402257 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.402266 | orchestrator | 2026-02-27 00:44:34.402276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402286 | orchestrator | Friday 27 February 2026 00:44:32 +0000 (0:00:00.207) 0:00:22.731 ******* 2026-02-27 00:44:34.402295 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.402305 | orchestrator | 2026-02-27 00:44:34.402314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402324 | orchestrator | Friday 27 February 2026 00:44:32 +0000 (0:00:00.249) 0:00:22.981 ******* 2026-02-27 00:44:34.402333 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:34.402343 | orchestrator | 2026-02-27 00:44:34.402352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402362 | orchestrator | Friday 27 February 2026 00:44:32 +0000 (0:00:00.246) 0:00:23.228 ******* 2026-02-27 00:44:34.402372 | orchestrator | skipping: [testbed-node-4] 
2026-02-27 00:44:34.402389 | orchestrator | 2026-02-27 00:44:34.402399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402408 | orchestrator | Friday 27 February 2026 00:44:33 +0000 (0:00:00.253) 0:00:23.481 ******* 2026-02-27 00:44:34.402418 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-27 00:44:34.402428 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-27 00:44:34.402438 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-27 00:44:34.402448 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-27 00:44:34.402457 | orchestrator | 2026-02-27 00:44:34.402467 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:34.402477 | orchestrator | Friday 27 February 2026 00:44:34 +0000 (0:00:01.005) 0:00:24.487 ******* 2026-02-27 00:44:34.402486 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838317 | orchestrator | 2026-02-27 00:44:41.838413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:41.838424 | orchestrator | Friday 27 February 2026 00:44:34 +0000 (0:00:00.250) 0:00:24.738 ******* 2026-02-27 00:44:41.838431 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838440 | orchestrator | 2026-02-27 00:44:41.838446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:41.838470 | orchestrator | Friday 27 February 2026 00:44:34 +0000 (0:00:00.216) 0:00:24.954 ******* 2026-02-27 00:44:41.838477 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838483 | orchestrator | 2026-02-27 00:44:41.838489 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:41.838496 | orchestrator | Friday 27 February 2026 00:44:34 +0000 (0:00:00.206) 0:00:25.161 ******* 2026-02-27 00:44:41.838502 | 
orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838508 | orchestrator | 2026-02-27 00:44:41.838514 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-27 00:44:41.838520 | orchestrator | Friday 27 February 2026 00:44:35 +0000 (0:00:00.797) 0:00:25.959 ******* 2026-02-27 00:44:41.838527 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-27 00:44:41.838533 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-27 00:44:41.838539 | orchestrator | 2026-02-27 00:44:41.838545 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-27 00:44:41.838552 | orchestrator | Friday 27 February 2026 00:44:35 +0000 (0:00:00.165) 0:00:26.124 ******* 2026-02-27 00:44:41.838558 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838564 | orchestrator | 2026-02-27 00:44:41.838570 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-27 00:44:41.838576 | orchestrator | Friday 27 February 2026 00:44:35 +0000 (0:00:00.128) 0:00:26.253 ******* 2026-02-27 00:44:41.838583 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838589 | orchestrator | 2026-02-27 00:44:41.838595 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-27 00:44:41.838601 | orchestrator | Friday 27 February 2026 00:44:36 +0000 (0:00:00.125) 0:00:26.378 ******* 2026-02-27 00:44:41.838607 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838613 | orchestrator | 2026-02-27 00:44:41.838619 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-27 00:44:41.838625 | orchestrator | Friday 27 February 2026 00:44:36 +0000 (0:00:00.131) 0:00:26.509 ******* 2026-02-27 00:44:41.838631 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:44:41.838689 | 
orchestrator | 2026-02-27 00:44:41.838695 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-27 00:44:41.838701 | orchestrator | Friday 27 February 2026 00:44:36 +0000 (0:00:00.182) 0:00:26.691 ******* 2026-02-27 00:44:41.838709 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'}}) 2026-02-27 00:44:41.838715 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91c1f24e-fd77-555b-b1fb-5152ae0ce974'}}) 2026-02-27 00:44:41.838742 | orchestrator | 2026-02-27 00:44:41.838748 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-27 00:44:41.838754 | orchestrator | Friday 27 February 2026 00:44:36 +0000 (0:00:00.185) 0:00:26.877 ******* 2026-02-27 00:44:41.838761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'}})  2026-02-27 00:44:41.838769 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91c1f24e-fd77-555b-b1fb-5152ae0ce974'}})  2026-02-27 00:44:41.838775 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838780 | orchestrator | 2026-02-27 00:44:41.838785 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-27 00:44:41.838791 | orchestrator | Friday 27 February 2026 00:44:36 +0000 (0:00:00.200) 0:00:27.078 ******* 2026-02-27 00:44:41.838796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'}})  2026-02-27 00:44:41.838802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91c1f24e-fd77-555b-b1fb-5152ae0ce974'}})  2026-02-27 00:44:41.838808 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838814 | orchestrator | 2026-02-27 
00:44:41.838820 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-27 00:44:41.838826 | orchestrator | Friday 27 February 2026 00:44:36 +0000 (0:00:00.167) 0:00:27.246 ******* 2026-02-27 00:44:41.838831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'}})  2026-02-27 00:44:41.838838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91c1f24e-fd77-555b-b1fb-5152ae0ce974'}})  2026-02-27 00:44:41.838844 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838850 | orchestrator | 2026-02-27 00:44:41.838855 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-27 00:44:41.838861 | orchestrator | Friday 27 February 2026 00:44:37 +0000 (0:00:00.230) 0:00:27.476 ******* 2026-02-27 00:44:41.838866 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:44:41.838871 | orchestrator | 2026-02-27 00:44:41.838877 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-27 00:44:41.838883 | orchestrator | Friday 27 February 2026 00:44:37 +0000 (0:00:00.203) 0:00:27.680 ******* 2026-02-27 00:44:41.838888 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:44:41.838894 | orchestrator | 2026-02-27 00:44:41.838900 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-27 00:44:41.838905 | orchestrator | Friday 27 February 2026 00:44:37 +0000 (0:00:00.177) 0:00:27.857 ******* 2026-02-27 00:44:41.838927 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838934 | orchestrator | 2026-02-27 00:44:41.838939 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-27 00:44:41.838945 | orchestrator | Friday 27 February 2026 00:44:37 +0000 (0:00:00.439) 0:00:28.297 ******* 2026-02-27 
00:44:41.838951 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838956 | orchestrator | 2026-02-27 00:44:41.838962 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-27 00:44:41.838968 | orchestrator | Friday 27 February 2026 00:44:38 +0000 (0:00:00.153) 0:00:28.450 ******* 2026-02-27 00:44:41.838973 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.838979 | orchestrator | 2026-02-27 00:44:41.838984 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-27 00:44:41.838990 | orchestrator | Friday 27 February 2026 00:44:38 +0000 (0:00:00.139) 0:00:28.590 ******* 2026-02-27 00:44:41.838996 | orchestrator | ok: [testbed-node-4] => { 2026-02-27 00:44:41.839001 | orchestrator |  "ceph_osd_devices": { 2026-02-27 00:44:41.839007 | orchestrator |  "sdb": { 2026-02-27 00:44:41.839014 | orchestrator |  "osd_lvm_uuid": "aa250c28-8715-5ad9-8f6a-4b8a4568e8d3" 2026-02-27 00:44:41.839020 | orchestrator |  }, 2026-02-27 00:44:41.839035 | orchestrator |  "sdc": { 2026-02-27 00:44:41.839049 | orchestrator |  "osd_lvm_uuid": "91c1f24e-fd77-555b-b1fb-5152ae0ce974" 2026-02-27 00:44:41.839056 | orchestrator |  } 2026-02-27 00:44:41.839063 | orchestrator |  } 2026-02-27 00:44:41.839069 | orchestrator | } 2026-02-27 00:44:41.839075 | orchestrator | 2026-02-27 00:44:41.839082 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-27 00:44:41.839088 | orchestrator | Friday 27 February 2026 00:44:38 +0000 (0:00:00.178) 0:00:28.768 ******* 2026-02-27 00:44:41.839093 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.839099 | orchestrator | 2026-02-27 00:44:41.839106 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-27 00:44:41.839112 | orchestrator | Friday 27 February 2026 00:44:38 +0000 (0:00:00.151) 0:00:28.920 ******* 2026-02-27 
00:44:41.839118 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.839125 | orchestrator | 2026-02-27 00:44:41.839132 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-27 00:44:41.839138 | orchestrator | Friday 27 February 2026 00:44:38 +0000 (0:00:00.163) 0:00:29.084 ******* 2026-02-27 00:44:41.839145 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:44:41.839151 | orchestrator | 2026-02-27 00:44:41.839158 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-27 00:44:41.839165 | orchestrator | Friday 27 February 2026 00:44:38 +0000 (0:00:00.130) 0:00:29.215 ******* 2026-02-27 00:44:41.839172 | orchestrator | changed: [testbed-node-4] => { 2026-02-27 00:44:41.839178 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-27 00:44:41.839185 | orchestrator |  "ceph_osd_devices": { 2026-02-27 00:44:41.839193 | orchestrator |  "sdb": { 2026-02-27 00:44:41.839202 | orchestrator |  "osd_lvm_uuid": "aa250c28-8715-5ad9-8f6a-4b8a4568e8d3" 2026-02-27 00:44:41.839208 | orchestrator |  }, 2026-02-27 00:44:41.839214 | orchestrator |  "sdc": { 2026-02-27 00:44:41.839220 | orchestrator |  "osd_lvm_uuid": "91c1f24e-fd77-555b-b1fb-5152ae0ce974" 2026-02-27 00:44:41.839226 | orchestrator |  } 2026-02-27 00:44:41.839232 | orchestrator |  }, 2026-02-27 00:44:41.839238 | orchestrator |  "lvm_volumes": [ 2026-02-27 00:44:41.839245 | orchestrator |  { 2026-02-27 00:44:41.839252 | orchestrator |  "data": "osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3", 2026-02-27 00:44:41.839258 | orchestrator |  "data_vg": "ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3" 2026-02-27 00:44:41.839264 | orchestrator |  }, 2026-02-27 00:44:41.839271 | orchestrator |  { 2026-02-27 00:44:41.839278 | orchestrator |  "data": "osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974", 2026-02-27 00:44:41.839284 | orchestrator |  "data_vg": "ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974" 2026-02-27 
00:44:41.839290 | orchestrator |  } 2026-02-27 00:44:41.839296 | orchestrator |  ] 2026-02-27 00:44:41.839304 | orchestrator |  } 2026-02-27 00:44:41.839310 | orchestrator | } 2026-02-27 00:44:41.839316 | orchestrator | 2026-02-27 00:44:41.839323 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-27 00:44:41.839329 | orchestrator | Friday 27 February 2026 00:44:39 +0000 (0:00:00.233) 0:00:29.448 ******* 2026-02-27 00:44:41.839335 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-27 00:44:41.839341 | orchestrator | 2026-02-27 00:44:41.839346 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-27 00:44:41.839352 | orchestrator | 2026-02-27 00:44:41.839359 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-27 00:44:41.839365 | orchestrator | Friday 27 February 2026 00:44:40 +0000 (0:00:01.289) 0:00:30.738 ******* 2026-02-27 00:44:41.839370 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-27 00:44:41.839376 | orchestrator | 2026-02-27 00:44:41.839382 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-27 00:44:41.839395 | orchestrator | Friday 27 February 2026 00:44:41 +0000 (0:00:00.752) 0:00:31.490 ******* 2026-02-27 00:44:41.839402 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:44:41.839409 | orchestrator | 2026-02-27 00:44:41.839416 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:41.839423 | orchestrator | Friday 27 February 2026 00:44:41 +0000 (0:00:00.259) 0:00:31.749 ******* 2026-02-27 00:44:41.839429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-27 00:44:41.839435 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => 
(item=loop1) 2026-02-27 00:44:41.839442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-27 00:44:41.839447 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-27 00:44:41.839454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-27 00:44:41.839468 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-27 00:44:51.063184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-27 00:44:51.063303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-27 00:44:51.063322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-27 00:44:51.063335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-27 00:44:51.063349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-27 00:44:51.063362 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-27 00:44:51.063377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-27 00:44:51.063391 | orchestrator | 2026-02-27 00:44:51.063406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063418 | orchestrator | Friday 27 February 2026 00:44:41 +0000 (0:00:00.422) 0:00:32.171 ******* 2026-02-27 00:44:51.063430 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063444 | orchestrator | 2026-02-27 00:44:51.063457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063471 | orchestrator | Friday 27 February 2026 00:44:42 +0000 
(0:00:00.182) 0:00:32.354 ******* 2026-02-27 00:44:51.063483 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063496 | orchestrator | 2026-02-27 00:44:51.063508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063522 | orchestrator | Friday 27 February 2026 00:44:42 +0000 (0:00:00.220) 0:00:32.574 ******* 2026-02-27 00:44:51.063536 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063549 | orchestrator | 2026-02-27 00:44:51.063562 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063576 | orchestrator | Friday 27 February 2026 00:44:42 +0000 (0:00:00.206) 0:00:32.781 ******* 2026-02-27 00:44:51.063589 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063619 | orchestrator | 2026-02-27 00:44:51.063676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063687 | orchestrator | Friday 27 February 2026 00:44:42 +0000 (0:00:00.199) 0:00:32.981 ******* 2026-02-27 00:44:51.063695 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063703 | orchestrator | 2026-02-27 00:44:51.063712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063721 | orchestrator | Friday 27 February 2026 00:44:42 +0000 (0:00:00.222) 0:00:33.204 ******* 2026-02-27 00:44:51.063731 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063740 | orchestrator | 2026-02-27 00:44:51.063770 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063781 | orchestrator | Friday 27 February 2026 00:44:43 +0000 (0:00:00.261) 0:00:33.466 ******* 2026-02-27 00:44:51.063822 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063831 | orchestrator | 2026-02-27 00:44:51.063839 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2026-02-27 00:44:51.063847 | orchestrator | Friday 27 February 2026 00:44:43 +0000 (0:00:00.219) 0:00:33.685 ******* 2026-02-27 00:44:51.063855 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.063863 | orchestrator | 2026-02-27 00:44:51.063870 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063879 | orchestrator | Friday 27 February 2026 00:44:43 +0000 (0:00:00.287) 0:00:33.973 ******* 2026-02-27 00:44:51.063887 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686) 2026-02-27 00:44:51.063900 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686) 2026-02-27 00:44:51.063912 | orchestrator | 2026-02-27 00:44:51.063920 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063928 | orchestrator | Friday 27 February 2026 00:44:44 +0000 (0:00:00.996) 0:00:34.969 ******* 2026-02-27 00:44:51.063936 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca) 2026-02-27 00:44:51.063943 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca) 2026-02-27 00:44:51.063951 | orchestrator | 2026-02-27 00:44:51.063958 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.063966 | orchestrator | Friday 27 February 2026 00:44:45 +0000 (0:00:00.492) 0:00:35.462 ******* 2026-02-27 00:44:51.063974 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d) 2026-02-27 00:44:51.063982 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d) 2026-02-27 00:44:51.063989 | orchestrator | 2026-02-27 00:44:51.064007 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.064015 | orchestrator | Friday 27 February 2026 00:44:45 +0000 (0:00:00.484) 0:00:35.946 ******* 2026-02-27 00:44:51.064022 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3) 2026-02-27 00:44:51.064030 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3) 2026-02-27 00:44:51.064038 | orchestrator | 2026-02-27 00:44:51.064046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:44:51.064053 | orchestrator | Friday 27 February 2026 00:44:46 +0000 (0:00:00.429) 0:00:36.375 ******* 2026-02-27 00:44:51.064061 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-27 00:44:51.064069 | orchestrator | 2026-02-27 00:44:51.064076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064101 | orchestrator | Friday 27 February 2026 00:44:46 +0000 (0:00:00.427) 0:00:36.803 ******* 2026-02-27 00:44:51.064110 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-27 00:44:51.064117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-27 00:44:51.064128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-27 00:44:51.064143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-27 00:44:51.064152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-27 00:44:51.064160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-27 00:44:51.064167 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-27 00:44:51.064175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-27 00:44:51.064191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-27 00:44:51.064198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-27 00:44:51.064206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-27 00:44:51.064214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-27 00:44:51.064222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-27 00:44:51.064234 | orchestrator | 2026-02-27 00:44:51.064247 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064259 | orchestrator | Friday 27 February 2026 00:44:47 +0000 (0:00:00.580) 0:00:37.383 ******* 2026-02-27 00:44:51.064271 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064283 | orchestrator | 2026-02-27 00:44:51.064295 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064307 | orchestrator | Friday 27 February 2026 00:44:47 +0000 (0:00:00.227) 0:00:37.611 ******* 2026-02-27 00:44:51.064317 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064328 | orchestrator | 2026-02-27 00:44:51.064340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064353 | orchestrator | Friday 27 February 2026 00:44:47 +0000 (0:00:00.285) 0:00:37.897 ******* 2026-02-27 00:44:51.064365 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064378 | orchestrator | 2026-02-27 00:44:51.064392 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064405 | orchestrator | Friday 27 February 2026 00:44:47 +0000 (0:00:00.215) 0:00:38.112 ******* 2026-02-27 00:44:51.064418 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064432 | orchestrator | 2026-02-27 00:44:51.064440 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064448 | orchestrator | Friday 27 February 2026 00:44:48 +0000 (0:00:00.256) 0:00:38.369 ******* 2026-02-27 00:44:51.064456 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064464 | orchestrator | 2026-02-27 00:44:51.064476 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064487 | orchestrator | Friday 27 February 2026 00:44:48 +0000 (0:00:00.204) 0:00:38.573 ******* 2026-02-27 00:44:51.064497 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064510 | orchestrator | 2026-02-27 00:44:51.064523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064535 | orchestrator | Friday 27 February 2026 00:44:49 +0000 (0:00:00.772) 0:00:39.346 ******* 2026-02-27 00:44:51.064548 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064562 | orchestrator | 2026-02-27 00:44:51.064573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064581 | orchestrator | Friday 27 February 2026 00:44:49 +0000 (0:00:00.213) 0:00:39.559 ******* 2026-02-27 00:44:51.064589 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064597 | orchestrator | 2026-02-27 00:44:51.064604 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064612 | orchestrator | Friday 27 February 2026 00:44:49 +0000 (0:00:00.224) 0:00:39.784 ******* 
2026-02-27 00:44:51.064620 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-27 00:44:51.064662 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-27 00:44:51.064678 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-27 00:44:51.064686 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-27 00:44:51.064694 | orchestrator | 2026-02-27 00:44:51.064702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064710 | orchestrator | Friday 27 February 2026 00:44:50 +0000 (0:00:00.723) 0:00:40.508 ******* 2026-02-27 00:44:51.064718 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064725 | orchestrator | 2026-02-27 00:44:51.064741 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064757 | orchestrator | Friday 27 February 2026 00:44:50 +0000 (0:00:00.227) 0:00:40.735 ******* 2026-02-27 00:44:51.064765 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064773 | orchestrator | 2026-02-27 00:44:51.064781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064789 | orchestrator | Friday 27 February 2026 00:44:50 +0000 (0:00:00.223) 0:00:40.959 ******* 2026-02-27 00:44:51.064797 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064805 | orchestrator | 2026-02-27 00:44:51.064813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:44:51.064821 | orchestrator | Friday 27 February 2026 00:44:50 +0000 (0:00:00.222) 0:00:41.181 ******* 2026-02-27 00:44:51.064830 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:51.064844 | orchestrator | 2026-02-27 00:44:51.064863 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-27 00:44:55.570321 | orchestrator | Friday 27 February 2026 00:44:51 
+0000 (0:00:00.216) 0:00:41.397 ******* 2026-02-27 00:44:55.570416 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-27 00:44:55.570427 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-27 00:44:55.570436 | orchestrator | 2026-02-27 00:44:55.570444 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-27 00:44:55.570452 | orchestrator | Friday 27 February 2026 00:44:51 +0000 (0:00:00.184) 0:00:41.582 ******* 2026-02-27 00:44:55.570459 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.570467 | orchestrator | 2026-02-27 00:44:55.570475 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-27 00:44:55.570482 | orchestrator | Friday 27 February 2026 00:44:51 +0000 (0:00:00.139) 0:00:41.721 ******* 2026-02-27 00:44:55.570489 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.570496 | orchestrator | 2026-02-27 00:44:55.570503 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-27 00:44:55.570510 | orchestrator | Friday 27 February 2026 00:44:51 +0000 (0:00:00.161) 0:00:41.882 ******* 2026-02-27 00:44:55.570517 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.570525 | orchestrator | 2026-02-27 00:44:55.570532 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-27 00:44:55.570539 | orchestrator | Friday 27 February 2026 00:44:51 +0000 (0:00:00.300) 0:00:42.183 ******* 2026-02-27 00:44:55.570546 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:44:55.570554 | orchestrator | 2026-02-27 00:44:55.570561 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-27 00:44:55.570569 | orchestrator | Friday 27 February 2026 00:44:51 +0000 (0:00:00.148) 0:00:42.332 ******* 2026-02-27 00:44:55.570577 | orchestrator 
| ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5630d52f-55a8-52f3-8c7d-90d730eab2c2'}}) 2026-02-27 00:44:55.570585 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e90026b5-6780-5a31-9cea-c7916e7559fe'}}) 2026-02-27 00:44:55.570592 | orchestrator | 2026-02-27 00:44:55.570599 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-27 00:44:55.570606 | orchestrator | Friday 27 February 2026 00:44:52 +0000 (0:00:00.184) 0:00:42.517 ******* 2026-02-27 00:44:55.570614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5630d52f-55a8-52f3-8c7d-90d730eab2c2'}})  2026-02-27 00:44:55.570693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e90026b5-6780-5a31-9cea-c7916e7559fe'}})  2026-02-27 00:44:55.570707 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.570715 | orchestrator | 2026-02-27 00:44:55.570724 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-27 00:44:55.570736 | orchestrator | Friday 27 February 2026 00:44:52 +0000 (0:00:00.157) 0:00:42.674 ******* 2026-02-27 00:44:55.570754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5630d52f-55a8-52f3-8c7d-90d730eab2c2'}})  2026-02-27 00:44:55.570792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e90026b5-6780-5a31-9cea-c7916e7559fe'}})  2026-02-27 00:44:55.570804 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.570815 | orchestrator | 2026-02-27 00:44:55.570825 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-27 00:44:55.570836 | orchestrator | Friday 27 February 2026 00:44:52 +0000 (0:00:00.173) 0:00:42.848 ******* 2026-02-27 00:44:55.570847 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5630d52f-55a8-52f3-8c7d-90d730eab2c2'}})  2026-02-27 00:44:55.570858 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e90026b5-6780-5a31-9cea-c7916e7559fe'}})  2026-02-27 00:44:55.570870 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.570881 | orchestrator | 2026-02-27 00:44:55.570893 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-27 00:44:55.570905 | orchestrator | Friday 27 February 2026 00:44:52 +0000 (0:00:00.157) 0:00:43.005 ******* 2026-02-27 00:44:55.570916 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:44:55.570927 | orchestrator | 2026-02-27 00:44:55.570938 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-27 00:44:55.570951 | orchestrator | Friday 27 February 2026 00:44:52 +0000 (0:00:00.131) 0:00:43.137 ******* 2026-02-27 00:44:55.570963 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:44:55.570974 | orchestrator | 2026-02-27 00:44:55.570986 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-27 00:44:55.570999 | orchestrator | Friday 27 February 2026 00:44:52 +0000 (0:00:00.148) 0:00:43.286 ******* 2026-02-27 00:44:55.571012 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.571024 | orchestrator | 2026-02-27 00:44:55.571035 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-27 00:44:55.571043 | orchestrator | Friday 27 February 2026 00:44:53 +0000 (0:00:00.139) 0:00:43.426 ******* 2026-02-27 00:44:55.571051 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.571059 | orchestrator | 2026-02-27 00:44:55.571072 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-27 00:44:55.571087 | orchestrator | Friday 27 February 2026 00:44:53 +0000 
(0:00:00.150) 0:00:43.576 ******* 2026-02-27 00:44:55.571104 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.571116 | orchestrator | 2026-02-27 00:44:55.571128 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-27 00:44:55.571139 | orchestrator | Friday 27 February 2026 00:44:53 +0000 (0:00:00.152) 0:00:43.729 ******* 2026-02-27 00:44:55.571151 | orchestrator | ok: [testbed-node-5] => { 2026-02-27 00:44:55.571163 | orchestrator |  "ceph_osd_devices": { 2026-02-27 00:44:55.571174 | orchestrator |  "sdb": { 2026-02-27 00:44:55.571205 | orchestrator |  "osd_lvm_uuid": "5630d52f-55a8-52f3-8c7d-90d730eab2c2" 2026-02-27 00:44:55.571218 | orchestrator |  }, 2026-02-27 00:44:55.571230 | orchestrator |  "sdc": { 2026-02-27 00:44:55.571242 | orchestrator |  "osd_lvm_uuid": "e90026b5-6780-5a31-9cea-c7916e7559fe" 2026-02-27 00:44:55.571254 | orchestrator |  } 2026-02-27 00:44:55.571267 | orchestrator |  } 2026-02-27 00:44:55.571279 | orchestrator | } 2026-02-27 00:44:55.571291 | orchestrator | 2026-02-27 00:44:55.571303 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-27 00:44:55.571310 | orchestrator | Friday 27 February 2026 00:44:53 +0000 (0:00:00.151) 0:00:43.880 ******* 2026-02-27 00:44:55.571318 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.571325 | orchestrator | 2026-02-27 00:44:55.571332 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-27 00:44:55.571339 | orchestrator | Friday 27 February 2026 00:44:53 +0000 (0:00:00.389) 0:00:44.270 ******* 2026-02-27 00:44:55.571346 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.571364 | orchestrator | 2026-02-27 00:44:55.571371 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-27 00:44:55.571378 | orchestrator | Friday 27 February 2026 00:44:54 +0000 
(0:00:00.138) 0:00:44.408 ******* 2026-02-27 00:44:55.571385 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:44:55.571392 | orchestrator | 2026-02-27 00:44:55.571400 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-27 00:44:55.571407 | orchestrator | Friday 27 February 2026 00:44:54 +0000 (0:00:00.143) 0:00:44.551 ******* 2026-02-27 00:44:55.571414 | orchestrator | changed: [testbed-node-5] => { 2026-02-27 00:44:55.571421 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-27 00:44:55.571428 | orchestrator |  "ceph_osd_devices": { 2026-02-27 00:44:55.571436 | orchestrator |  "sdb": { 2026-02-27 00:44:55.571446 | orchestrator |  "osd_lvm_uuid": "5630d52f-55a8-52f3-8c7d-90d730eab2c2" 2026-02-27 00:44:55.571458 | orchestrator |  }, 2026-02-27 00:44:55.571468 | orchestrator |  "sdc": { 2026-02-27 00:44:55.571485 | orchestrator |  "osd_lvm_uuid": "e90026b5-6780-5a31-9cea-c7916e7559fe" 2026-02-27 00:44:55.571498 | orchestrator |  } 2026-02-27 00:44:55.571509 | orchestrator |  }, 2026-02-27 00:44:55.571520 | orchestrator |  "lvm_volumes": [ 2026-02-27 00:44:55.571531 | orchestrator |  { 2026-02-27 00:44:55.571543 | orchestrator |  "data": "osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2", 2026-02-27 00:44:55.571556 | orchestrator |  "data_vg": "ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2" 2026-02-27 00:44:55.571568 | orchestrator |  }, 2026-02-27 00:44:55.571580 | orchestrator |  { 2026-02-27 00:44:55.571592 | orchestrator |  "data": "osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe", 2026-02-27 00:44:55.571616 | orchestrator |  "data_vg": "ceph-e90026b5-6780-5a31-9cea-c7916e7559fe" 2026-02-27 00:44:55.571677 | orchestrator |  } 2026-02-27 00:44:55.571686 | orchestrator |  ] 2026-02-27 00:44:55.571697 | orchestrator |  } 2026-02-27 00:44:55.571705 | orchestrator | } 2026-02-27 00:44:55.571712 | orchestrator | 2026-02-27 00:44:55.571719 | orchestrator | RUNNING HANDLER [Write configuration file] 
************************************* 2026-02-27 00:44:55.571727 | orchestrator | Friday 27 February 2026 00:44:54 +0000 (0:00:00.252) 0:00:44.803 ******* 2026-02-27 00:44:55.571734 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-27 00:44:55.571741 | orchestrator | 2026-02-27 00:44:55.571748 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:44:55.571756 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-27 00:44:55.571765 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-27 00:44:55.571772 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-27 00:44:55.571780 | orchestrator | 2026-02-27 00:44:55.571787 | orchestrator | 2026-02-27 00:44:55.571794 | orchestrator | 2026-02-27 00:44:55.571801 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:44:55.571808 | orchestrator | Friday 27 February 2026 00:44:55 +0000 (0:00:01.078) 0:00:45.882 ******* 2026-02-27 00:44:55.571816 | orchestrator | =============================================================================== 2026-02-27 00:44:55.571823 | orchestrator | Write configuration file ------------------------------------------------ 4.31s 2026-02-27 00:44:55.571830 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2026-02-27 00:44:55.571837 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.28s 2026-02-27 00:44:55.571844 | orchestrator | Add known links to the list of available block devices ------------------ 1.26s 2026-02-27 00:44:55.571858 | orchestrator | Add known partitions to the list of available block devices ------------- 1.20s 2026-02-27 00:44:55.571867 | orchestrator | Add 
known partitions to the list of available block devices ------------- 1.01s 2026-02-27 00:44:55.571882 | orchestrator | Add known links to the list of available block devices ------------------ 1.00s 2026-02-27 00:44:55.571899 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s 2026-02-27 00:44:55.571911 | orchestrator | Print configuration data ------------------------------------------------ 0.91s 2026-02-27 00:44:55.571923 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s 2026-02-27 00:44:55.571934 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2026-02-27 00:44:55.571946 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-02-27 00:44:55.571959 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-02-27 00:44:55.571980 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-02-27 00:44:55.974549 | orchestrator | Set DB devices config data ---------------------------------------------- 0.72s 2026-02-27 00:44:55.974673 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2026-02-27 00:44:55.974687 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.71s 2026-02-27 00:44:55.974697 | orchestrator | Print WAL devices ------------------------------------------------------- 0.68s 2026-02-27 00:44:55.974707 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2026-02-27 00:44:55.974717 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.56s 2026-02-27 00:45:18.866886 | orchestrator | 2026-02-27 00:45:18 | INFO  | Task 0cd83b78-a97e-4911-8b17-797f3d232277 (sync inventory) is running in background. Output coming soon. 
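Aside: the "Print configuration data" output above shows how the playbook derives `lvm_volumes` from `ceph_osd_devices` — each device's `osd_lvm_uuid` is expanded into an `osd-block-<uuid>` logical volume inside a `ceph-<uuid>` volume group. A minimal Python sketch of that mapping (illustrative only; the actual transformation is done in the Ansible tasks, and the function name here is hypothetical):

```python
def lvm_volumes_for(ceph_osd_devices):
    """Mirror the lvm_volumes structure printed in the log:
    one {data, data_vg} entry per OSD device, both derived
    from the device's osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# Using the testbed-node-4 values from the log:
devices = {
    "sdb": {"osd_lvm_uuid": "aa250c28-8715-5ad9-8f6a-4b8a4568e8d3"},
    "sdc": {"osd_lvm_uuid": "91c1f24e-fd77-555b-b1fb-5152ae0ce974"},
}
volumes = lvm_volumes_for(devices)
```

This matches the `lvm_volumes` list written out by the "Write configuration file" handler for each node.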
2026-02-27 00:45:46.010359 | orchestrator | 2026-02-27 00:45:20 | INFO  | Starting group_vars file reorganization 2026-02-27 00:45:46.010516 | orchestrator | 2026-02-27 00:45:20 | INFO  | Moved 0 file(s) to their respective directories 2026-02-27 00:45:46.010548 | orchestrator | 2026-02-27 00:45:20 | INFO  | Group_vars file reorganization completed 2026-02-27 00:45:46.010566 | orchestrator | 2026-02-27 00:45:23 | INFO  | Starting variable preparation from inventory 2026-02-27 00:45:46.010651 | orchestrator | 2026-02-27 00:45:27 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-27 00:45:46.010672 | orchestrator | 2026-02-27 00:45:27 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-27 00:45:46.010692 | orchestrator | 2026-02-27 00:45:27 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-27 00:45:46.010710 | orchestrator | 2026-02-27 00:45:27 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-27 00:45:46.010728 | orchestrator | 2026-02-27 00:45:27 | INFO  | Variable preparation completed 2026-02-27 00:45:46.010741 | orchestrator | 2026-02-27 00:45:29 | INFO  | Starting inventory overwrite handling 2026-02-27 00:45:46.010753 | orchestrator | 2026-02-27 00:45:29 | INFO  | Handling group overwrites in 99-overwrite 2026-02-27 00:45:46.010764 | orchestrator | 2026-02-27 00:45:29 | INFO  | Removing group frr:children from 60-generic 2026-02-27 00:45:46.010775 | orchestrator | 2026-02-27 00:45:29 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-27 00:45:46.010786 | orchestrator | 2026-02-27 00:45:29 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-27 00:45:46.010797 | orchestrator | 2026-02-27 00:45:29 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-27 00:45:46.010808 | orchestrator | 2026-02-27 00:45:29 | INFO  | Handling group overwrites in 20-roles 2026-02-27 00:45:46.010819 | orchestrator | 2026-02-27 00:45:29 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-27 00:45:46.010857 | orchestrator | 2026-02-27 00:45:29 | INFO  | Removed 5 group(s) in total 2026-02-27 00:45:46.010872 | orchestrator | 2026-02-27 00:45:29 | INFO  | Inventory overwrite handling completed 2026-02-27 00:45:46.010891 | orchestrator | 2026-02-27 00:45:30 | INFO  | Starting merge of inventory files 2026-02-27 00:45:46.010910 | orchestrator | 2026-02-27 00:45:30 | INFO  | Inventory files merged successfully 2026-02-27 00:45:46.010929 | orchestrator | 2026-02-27 00:45:34 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-27 00:45:46.010948 | orchestrator | 2026-02-27 00:45:44 | INFO  | Successfully wrote ClusterShell configuration 2026-02-27 00:45:46.010968 | orchestrator | [master 12e1ed6] 2026-02-27-00-45 2026-02-27 00:45:46.010989 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-27 00:45:48.703519 | orchestrator | 2026-02-27 00:45:48 | INFO  | Task 7572dc96-4fd3-4951-a661-e8e53b67d764 (ceph-create-lvm-devices) was prepared for execution. 2026-02-27 00:45:48.703656 | orchestrator | 2026-02-27 00:45:48 | INFO  | It takes a moment until task 7572dc96-4fd3-4951-a661-e8e53b67d764 (ceph-create-lvm-devices) has been started and output is visible here. 
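Aside: the OSD UUIDs assigned by "Set UUIDs for OSD VGs/LVs" (e.g. `5630d52f-55a8-52f3-…`) carry version digit 5, i.e. they look like name-based UUIDs, which would make them stable across reruns for the same host/device. A sketch of how such deterministic IDs can be produced with Python's `uuid.uuid5` — note the namespace and name format here are assumptions for illustration, not the values OSISM actually uses:

```python
import uuid

# Hypothetical namespace; the real playbook's namespace/name inputs
# are not shown in the log.
OSD_NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000000")

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, name-based (version 5) UUID for a given
    host/device pair, so repeated runs assign the same ID."""
    return str(uuid.uuid5(OSD_NAMESPACE, f"{hostname}-{device}"))

# Deterministic: the same inputs always yield the same UUID.
a = osd_lvm_uuid("testbed-node-5", "sdb")
b = osd_lvm_uuid("testbed-node-5", "sdb")
```

Stability matters here because the UUID is embedded in VG/LV names (`ceph-<uuid>`, `osd-block-<uuid>`); a random UUID would make the configuration non-idempotent.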
2026-02-27 00:46:03.928245 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-27 00:46:03.928329 | orchestrator | 2.16.14 2026-02-27 00:46:03.928344 | orchestrator | 2026-02-27 00:46:03.928356 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-27 00:46:03.928367 | orchestrator | 2026-02-27 00:46:03.928377 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-27 00:46:03.928388 | orchestrator | Friday 27 February 2026 00:45:55 +0000 (0:00:00.384) 0:00:00.384 ******* 2026-02-27 00:46:03.928411 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-27 00:46:03.928422 | orchestrator | 2026-02-27 00:46:03.928431 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-27 00:46:03.928442 | orchestrator | Friday 27 February 2026 00:45:56 +0000 (0:00:00.303) 0:00:00.688 ******* 2026-02-27 00:46:03.928451 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:46:03.928461 | orchestrator | 2026-02-27 00:46:03.928470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.928482 | orchestrator | Friday 27 February 2026 00:45:56 +0000 (0:00:00.253) 0:00:00.941 ******* 2026-02-27 00:46:03.928492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-27 00:46:03.928502 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-27 00:46:03.928512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-27 00:46:03.928522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-27 00:46:03.928532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-27 
00:46:03.928541 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-27 00:46:03.928551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-27 00:46:03.928597 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-27 00:46:03.928608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-27 00:46:03.928635 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-27 00:46:03.928647 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-27 00:46:03.928659 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-27 00:46:03.928670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-27 00:46:03.928701 | orchestrator | 2026-02-27 00:46:03.928713 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.928725 | orchestrator | Friday 27 February 2026 00:45:57 +0000 (0:00:00.668) 0:00:01.609 ******* 2026-02-27 00:46:03.928737 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.928748 | orchestrator | 2026-02-27 00:46:03.928760 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.928771 | orchestrator | Friday 27 February 2026 00:45:57 +0000 (0:00:00.243) 0:00:01.852 ******* 2026-02-27 00:46:03.928783 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.928794 | orchestrator | 2026-02-27 00:46:03.928805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.928822 | orchestrator | Friday 27 February 2026 00:45:57 +0000 (0:00:00.208) 0:00:02.061 ******* 2026-02-27 
00:46:03.928834 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.928845 | orchestrator | 2026-02-27 00:46:03.928857 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.928869 | orchestrator | Friday 27 February 2026 00:45:57 +0000 (0:00:00.187) 0:00:02.248 ******* 2026-02-27 00:46:03.928880 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.928892 | orchestrator | 2026-02-27 00:46:03.928904 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.928917 | orchestrator | Friday 27 February 2026 00:45:57 +0000 (0:00:00.250) 0:00:02.498 ******* 2026-02-27 00:46:03.928928 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.928940 | orchestrator | 2026-02-27 00:46:03.928951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.928963 | orchestrator | Friday 27 February 2026 00:45:58 +0000 (0:00:00.230) 0:00:02.729 ******* 2026-02-27 00:46:03.928975 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.928986 | orchestrator | 2026-02-27 00:46:03.928998 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.929010 | orchestrator | Friday 27 February 2026 00:45:58 +0000 (0:00:00.239) 0:00:02.968 ******* 2026-02-27 00:46:03.929021 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929033 | orchestrator | 2026-02-27 00:46:03.929045 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.929057 | orchestrator | Friday 27 February 2026 00:45:58 +0000 (0:00:00.299) 0:00:03.267 ******* 2026-02-27 00:46:03.929068 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929080 | orchestrator | 2026-02-27 00:46:03.929092 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-27 00:46:03.929104 | orchestrator | Friday 27 February 2026 00:45:58 +0000 (0:00:00.234) 0:00:03.502 ******* 2026-02-27 00:46:03.929116 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b) 2026-02-27 00:46:03.929128 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b) 2026-02-27 00:46:03.929139 | orchestrator | 2026-02-27 00:46:03.929151 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.929180 | orchestrator | Friday 27 February 2026 00:45:59 +0000 (0:00:00.528) 0:00:04.031 ******* 2026-02-27 00:46:03.929192 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222) 2026-02-27 00:46:03.929203 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222) 2026-02-27 00:46:03.929214 | orchestrator | 2026-02-27 00:46:03.929226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.929238 | orchestrator | Friday 27 February 2026 00:46:00 +0000 (0:00:00.712) 0:00:04.744 ******* 2026-02-27 00:46:03.929249 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c) 2026-02-27 00:46:03.929261 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c) 2026-02-27 00:46:03.929282 | orchestrator | 2026-02-27 00:46:03.929293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.929305 | orchestrator | Friday 27 February 2026 00:46:01 +0000 (0:00:00.818) 0:00:05.562 ******* 2026-02-27 00:46:03.929317 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b) 2026-02-27 00:46:03.929329 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b) 2026-02-27 00:46:03.929340 | orchestrator | 2026-02-27 00:46:03.929352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:03.929363 | orchestrator | Friday 27 February 2026 00:46:01 +0000 (0:00:00.740) 0:00:06.303 ******* 2026-02-27 00:46:03.929374 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-27 00:46:03.929386 | orchestrator | 2026-02-27 00:46:03.929397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929409 | orchestrator | Friday 27 February 2026 00:46:02 +0000 (0:00:00.335) 0:00:06.639 ******* 2026-02-27 00:46:03.929420 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-27 00:46:03.929432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-27 00:46:03.929443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-27 00:46:03.929510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-27 00:46:03.929525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-27 00:46:03.929536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-27 00:46:03.929548 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-27 00:46:03.929575 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-27 00:46:03.929587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-27 00:46:03.929599 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-27 00:46:03.929610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-27 00:46:03.929622 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-27 00:46:03.929633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-27 00:46:03.929645 | orchestrator | 2026-02-27 00:46:03.929656 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929668 | orchestrator | Friday 27 February 2026 00:46:02 +0000 (0:00:00.435) 0:00:07.074 ******* 2026-02-27 00:46:03.929679 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929691 | orchestrator | 2026-02-27 00:46:03.929703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929714 | orchestrator | Friday 27 February 2026 00:46:02 +0000 (0:00:00.201) 0:00:07.276 ******* 2026-02-27 00:46:03.929725 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929737 | orchestrator | 2026-02-27 00:46:03.929748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929760 | orchestrator | Friday 27 February 2026 00:46:02 +0000 (0:00:00.185) 0:00:07.461 ******* 2026-02-27 00:46:03.929771 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929783 | orchestrator | 2026-02-27 00:46:03.929794 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929806 | orchestrator | Friday 27 February 2026 00:46:03 +0000 (0:00:00.200) 0:00:07.662 ******* 2026-02-27 00:46:03.929817 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929836 | orchestrator | 2026-02-27 00:46:03.929848 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929859 | orchestrator | Friday 27 February 2026 00:46:03 +0000 (0:00:00.205) 0:00:07.868 ******* 2026-02-27 00:46:03.929871 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929882 | orchestrator | 2026-02-27 00:46:03.929894 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929905 | orchestrator | Friday 27 February 2026 00:46:03 +0000 (0:00:00.198) 0:00:08.067 ******* 2026-02-27 00:46:03.929917 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929928 | orchestrator | 2026-02-27 00:46:03.929939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:03.929951 | orchestrator | Friday 27 February 2026 00:46:03 +0000 (0:00:00.188) 0:00:08.255 ******* 2026-02-27 00:46:03.929963 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:03.929974 | orchestrator | 2026-02-27 00:46:03.929991 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:11.820664 | orchestrator | Friday 27 February 2026 00:46:03 +0000 (0:00:00.218) 0:00:08.474 ******* 2026-02-27 00:46:11.820749 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.820765 | orchestrator | 2026-02-27 00:46:11.820778 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:11.820788 | orchestrator | Friday 27 February 2026 00:46:04 +0000 (0:00:00.235) 0:00:08.710 ******* 2026-02-27 00:46:11.820800 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-27 00:46:11.820812 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-27 00:46:11.820822 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-27 00:46:11.820833 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-27 00:46:11.820844 | orchestrator | 2026-02-27 
00:46:11.820855 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:11.820866 | orchestrator | Friday 27 February 2026 00:46:05 +0000 (0:00:00.918) 0:00:09.628 ******* 2026-02-27 00:46:11.820877 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.820888 | orchestrator | 2026-02-27 00:46:11.820898 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:11.820909 | orchestrator | Friday 27 February 2026 00:46:05 +0000 (0:00:00.201) 0:00:09.829 ******* 2026-02-27 00:46:11.820921 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.820931 | orchestrator | 2026-02-27 00:46:11.820943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:11.820955 | orchestrator | Friday 27 February 2026 00:46:05 +0000 (0:00:00.224) 0:00:10.054 ******* 2026-02-27 00:46:11.820966 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.820978 | orchestrator | 2026-02-27 00:46:11.820990 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-27 00:46:11.821001 | orchestrator | Friday 27 February 2026 00:46:05 +0000 (0:00:00.198) 0:00:10.253 ******* 2026-02-27 00:46:11.821012 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821022 | orchestrator | 2026-02-27 00:46:11.821034 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-27 00:46:11.821045 | orchestrator | Friday 27 February 2026 00:46:05 +0000 (0:00:00.201) 0:00:10.454 ******* 2026-02-27 00:46:11.821056 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821066 | orchestrator | 2026-02-27 00:46:11.821078 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-27 00:46:11.821089 | orchestrator | Friday 27 February 2026 00:46:06 +0000 (0:00:00.156) 
0:00:10.611 ******* 2026-02-27 00:46:11.821116 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}}) 2026-02-27 00:46:11.821127 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '15e091ae-77f4-5dd5-92b2-2aa74778b61d'}}) 2026-02-27 00:46:11.821138 | orchestrator | 2026-02-27 00:46:11.821150 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-27 00:46:11.821181 | orchestrator | Friday 27 February 2026 00:46:06 +0000 (0:00:00.183) 0:00:10.794 ******* 2026-02-27 00:46:11.821193 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}) 2026-02-27 00:46:11.821205 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'}) 2026-02-27 00:46:11.821229 | orchestrator | 2026-02-27 00:46:11.821243 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-27 00:46:11.821260 | orchestrator | Friday 27 February 2026 00:46:08 +0000 (0:00:02.012) 0:00:12.807 ******* 2026-02-27 00:46:11.821310 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.821324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.821362 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821377 | orchestrator | 2026-02-27 00:46:11.821389 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-27 00:46:11.821401 | orchestrator | Friday 27 February 2026 
00:46:08 +0000 (0:00:00.141) 0:00:12.948 ******* 2026-02-27 00:46:11.821413 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}) 2026-02-27 00:46:11.821486 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'}) 2026-02-27 00:46:11.821498 | orchestrator | 2026-02-27 00:46:11.821509 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-27 00:46:11.821520 | orchestrator | Friday 27 February 2026 00:46:09 +0000 (0:00:01.438) 0:00:14.386 ******* 2026-02-27 00:46:11.821532 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.821543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.821574 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821586 | orchestrator | 2026-02-27 00:46:11.821597 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-27 00:46:11.821608 | orchestrator | Friday 27 February 2026 00:46:09 +0000 (0:00:00.159) 0:00:14.545 ******* 2026-02-27 00:46:11.821639 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821651 | orchestrator | 2026-02-27 00:46:11.821661 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-27 00:46:11.821672 | orchestrator | Friday 27 February 2026 00:46:10 +0000 (0:00:00.132) 0:00:14.678 ******* 2026-02-27 00:46:11.821683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 
'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.821695 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.821707 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821719 | orchestrator | 2026-02-27 00:46:11.821730 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-27 00:46:11.821741 | orchestrator | Friday 27 February 2026 00:46:10 +0000 (0:00:00.329) 0:00:15.008 ******* 2026-02-27 00:46:11.821752 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821763 | orchestrator | 2026-02-27 00:46:11.821775 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-27 00:46:11.821786 | orchestrator | Friday 27 February 2026 00:46:10 +0000 (0:00:00.149) 0:00:15.157 ******* 2026-02-27 00:46:11.821807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.821819 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.821831 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821843 | orchestrator | 2026-02-27 00:46:11.821854 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-27 00:46:11.821865 | orchestrator | Friday 27 February 2026 00:46:10 +0000 (0:00:00.153) 0:00:15.310 ******* 2026-02-27 00:46:11.821877 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821890 | orchestrator | 2026-02-27 00:46:11.821901 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-27 00:46:11.821913 | orchestrator | Friday 
27 February 2026 00:46:10 +0000 (0:00:00.143) 0:00:15.454 ******* 2026-02-27 00:46:11.821925 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.821936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.821948 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.821959 | orchestrator | 2026-02-27 00:46:11.821972 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-27 00:46:11.821983 | orchestrator | Friday 27 February 2026 00:46:11 +0000 (0:00:00.162) 0:00:15.616 ******* 2026-02-27 00:46:11.821994 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:46:11.822005 | orchestrator | 2026-02-27 00:46:11.822064 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-27 00:46:11.822077 | orchestrator | Friday 27 February 2026 00:46:11 +0000 (0:00:00.139) 0:00:15.755 ******* 2026-02-27 00:46:11.822094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.822106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.822119 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.822130 | orchestrator | 2026-02-27 00:46:11.822141 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-27 00:46:11.822153 | orchestrator | Friday 27 February 2026 00:46:11 +0000 (0:00:00.156) 0:00:15.912 ******* 2026-02-27 00:46:11.822164 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.822175 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.822188 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.822199 | orchestrator | 2026-02-27 00:46:11.822211 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-27 00:46:11.822223 | orchestrator | Friday 27 February 2026 00:46:11 +0000 (0:00:00.167) 0:00:16.079 ******* 2026-02-27 00:46:11.822234 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})  2026-02-27 00:46:11.822245 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})  2026-02-27 00:46:11.822257 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.822269 | orchestrator | 2026-02-27 00:46:11.822282 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-27 00:46:11.822293 | orchestrator | Friday 27 February 2026 00:46:11 +0000 (0:00:00.161) 0:00:16.241 ******* 2026-02-27 00:46:11.822314 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:11.822326 | orchestrator | 2026-02-27 00:46:11.822338 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-27 00:46:11.822360 | orchestrator | Friday 27 February 2026 00:46:11 +0000 (0:00:00.125) 0:00:16.366 ******* 2026-02-27 00:46:18.147294 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:18.147405 | orchestrator | 2026-02-27 00:46:18.147422 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-27 00:46:18.147434 | orchestrator | Friday 27 February 2026 00:46:11 +0000 (0:00:00.143) 0:00:16.509 ******* 2026-02-27 00:46:18.147445 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:18.147456 | orchestrator | 2026-02-27 00:46:18.147467 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-27 00:46:18.147478 | orchestrator | Friday 27 February 2026 00:46:12 +0000 (0:00:00.167) 0:00:16.677 ******* 2026-02-27 00:46:18.147489 | orchestrator | ok: [testbed-node-3] => { 2026-02-27 00:46:18.147500 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-27 00:46:18.147511 | orchestrator | } 2026-02-27 00:46:18.147522 | orchestrator | 2026-02-27 00:46:18.147533 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-27 00:46:18.147590 | orchestrator | Friday 27 February 2026 00:46:12 +0000 (0:00:00.292) 0:00:16.970 ******* 2026-02-27 00:46:18.147605 | orchestrator | ok: [testbed-node-3] => { 2026-02-27 00:46:18.147616 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-27 00:46:18.147627 | orchestrator | } 2026-02-27 00:46:18.147638 | orchestrator | 2026-02-27 00:46:18.147649 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-27 00:46:18.147660 | orchestrator | Friday 27 February 2026 00:46:12 +0000 (0:00:00.156) 0:00:17.126 ******* 2026-02-27 00:46:18.147671 | orchestrator | ok: [testbed-node-3] => { 2026-02-27 00:46:18.147683 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-27 00:46:18.147694 | orchestrator | } 2026-02-27 00:46:18.147705 | orchestrator | 2026-02-27 00:46:18.147716 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-27 00:46:18.147727 | orchestrator | Friday 27 February 2026 00:46:12 +0000 (0:00:00.129) 0:00:17.255 ******* 2026-02-27 00:46:18.147738 | orchestrator | ok: 
[testbed-node-3] 2026-02-27 00:46:18.147749 | orchestrator | 2026-02-27 00:46:18.147760 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-27 00:46:18.147771 | orchestrator | Friday 27 February 2026 00:46:13 +0000 (0:00:00.651) 0:00:17.907 ******* 2026-02-27 00:46:18.147782 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:46:18.147792 | orchestrator | 2026-02-27 00:46:18.147803 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-27 00:46:18.147814 | orchestrator | Friday 27 February 2026 00:46:13 +0000 (0:00:00.512) 0:00:18.419 ******* 2026-02-27 00:46:18.147825 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:46:18.147835 | orchestrator | 2026-02-27 00:46:18.147846 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-27 00:46:18.147857 | orchestrator | Friday 27 February 2026 00:46:14 +0000 (0:00:00.506) 0:00:18.926 ******* 2026-02-27 00:46:18.147868 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:46:18.147879 | orchestrator | 2026-02-27 00:46:18.147889 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-27 00:46:18.147900 | orchestrator | Friday 27 February 2026 00:46:14 +0000 (0:00:00.118) 0:00:19.044 ******* 2026-02-27 00:46:18.147911 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:18.147923 | orchestrator | 2026-02-27 00:46:18.147943 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-27 00:46:18.147963 | orchestrator | Friday 27 February 2026 00:46:14 +0000 (0:00:00.120) 0:00:19.165 ******* 2026-02-27 00:46:18.147982 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:46:18.148001 | orchestrator | 2026-02-27 00:46:18.148020 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-27 00:46:18.148064 | orchestrator | 
Friday 27 February 2026 00:46:14 +0000 (0:00:00.106) 0:00:19.271 *******
2026-02-27 00:46:18.148085 | orchestrator | ok: [testbed-node-3] => {
2026-02-27 00:46:18.148103 | orchestrator |     "vgs_report": {
2026-02-27 00:46:18.148122 | orchestrator |         "vg": []
2026-02-27 00:46:18.148141 | orchestrator |     }
2026-02-27 00:46:18.148160 | orchestrator | }
2026-02-27 00:46:18.148177 | orchestrator |
2026-02-27 00:46:18.148197 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-27 00:46:18.148218 | orchestrator | Friday 27 February 2026 00:46:14 +0000 (0:00:00.134) 0:00:19.406 *******
2026-02-27 00:46:18.148236 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148255 | orchestrator |
2026-02-27 00:46:18.148293 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-27 00:46:18.148306 | orchestrator | Friday 27 February 2026 00:46:14 +0000 (0:00:00.133) 0:00:19.539 *******
2026-02-27 00:46:18.148317 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148327 | orchestrator |
2026-02-27 00:46:18.148338 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-27 00:46:18.148349 | orchestrator | Friday 27 February 2026 00:46:15 +0000 (0:00:00.133) 0:00:19.672 *******
2026-02-27 00:46:18.148360 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148370 | orchestrator |
2026-02-27 00:46:18.148381 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-27 00:46:18.148392 | orchestrator | Friday 27 February 2026 00:46:15 +0000 (0:00:00.291) 0:00:19.964 *******
2026-02-27 00:46:18.148402 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148413 | orchestrator |
2026-02-27 00:46:18.148424 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-27 00:46:18.148435 | orchestrator | Friday 27 February 2026 00:46:15 +0000 (0:00:00.165) 0:00:20.129 *******
2026-02-27 00:46:18.148446 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148457 | orchestrator |
2026-02-27 00:46:18.148467 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-27 00:46:18.148478 | orchestrator | Friday 27 February 2026 00:46:15 +0000 (0:00:00.187) 0:00:20.317 *******
2026-02-27 00:46:18.148489 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148500 | orchestrator |
2026-02-27 00:46:18.148510 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-27 00:46:18.148521 | orchestrator | Friday 27 February 2026 00:46:15 +0000 (0:00:00.130) 0:00:20.447 *******
2026-02-27 00:46:18.148532 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148543 | orchestrator |
2026-02-27 00:46:18.148586 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-27 00:46:18.148598 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.159) 0:00:20.607 *******
2026-02-27 00:46:18.148630 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148642 | orchestrator |
2026-02-27 00:46:18.148653 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-27 00:46:18.148664 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.116) 0:00:20.724 *******
2026-02-27 00:46:18.148675 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148686 | orchestrator |
2026-02-27 00:46:18.148696 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-27 00:46:18.148707 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.135) 0:00:20.860 *******
2026-02-27 00:46:18.148718 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148728 | orchestrator |
2026-02-27 00:46:18.148739 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-27 00:46:18.148750 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.120) 0:00:20.980 *******
2026-02-27 00:46:18.148760 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148771 | orchestrator |
2026-02-27 00:46:18.148782 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-27 00:46:18.148792 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.131) 0:00:21.111 *******
2026-02-27 00:46:18.148814 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148825 | orchestrator |
2026-02-27 00:46:18.148836 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-27 00:46:18.148847 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.128) 0:00:21.240 *******
2026-02-27 00:46:18.148858 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148868 | orchestrator |
2026-02-27 00:46:18.148879 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-27 00:46:18.148890 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.142) 0:00:21.383 *******
2026-02-27 00:46:18.148901 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148911 | orchestrator |
2026-02-27 00:46:18.148922 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-27 00:46:18.148933 | orchestrator | Friday 27 February 2026 00:46:16 +0000 (0:00:00.139) 0:00:21.523 *******
2026-02-27 00:46:18.148945 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:18.148957 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:18.148968 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.148979 | orchestrator |
2026-02-27 00:46:18.148990 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-27 00:46:18.149001 | orchestrator | Friday 27 February 2026 00:46:17 +0000 (0:00:00.334) 0:00:21.857 *******
2026-02-27 00:46:18.149012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:18.149022 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:18.149033 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.149045 | orchestrator |
2026-02-27 00:46:18.149055 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-27 00:46:18.149071 | orchestrator | Friday 27 February 2026 00:46:17 +0000 (0:00:00.172) 0:00:22.030 *******
2026-02-27 00:46:18.149083 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:18.149094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:18.149105 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.149115 | orchestrator |
2026-02-27 00:46:18.149126 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-27 00:46:18.149137 | orchestrator | Friday 27 February 2026 00:46:17 +0000 (0:00:00.185) 0:00:22.216 *******
2026-02-27 00:46:18.149148 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:18.149159 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:18.149170 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.149180 | orchestrator |
2026-02-27 00:46:18.149191 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-27 00:46:18.149202 | orchestrator | Friday 27 February 2026 00:46:17 +0000 (0:00:00.180) 0:00:22.397 *******
2026-02-27 00:46:18.149212 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:18.149223 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:18.149240 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:18.149251 | orchestrator |
2026-02-27 00:46:18.149261 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-27 00:46:18.149272 | orchestrator | Friday 27 February 2026 00:46:17 +0000 (0:00:00.140) 0:00:22.537 *******
2026-02-27 00:46:18.149290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:23.043029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:23.043104 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:23.043111 | orchestrator |
2026-02-27 00:46:23.043116 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-27 00:46:23.043122 | orchestrator | Friday 27 February 2026 00:46:18 +0000 (0:00:00.159) 0:00:22.697 *******
2026-02-27 00:46:23.043127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:23.043131 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:23.043136 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:23.043139 | orchestrator |
2026-02-27 00:46:23.043144 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-27 00:46:23.043148 | orchestrator | Friday 27 February 2026 00:46:18 +0000 (0:00:00.145) 0:00:22.842 *******
2026-02-27 00:46:23.043152 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:23.043156 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:23.043161 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:23.043167 | orchestrator |
2026-02-27 00:46:23.043173 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-27 00:46:23.043180 | orchestrator | Friday 27 February 2026 00:46:18 +0000 (0:00:00.134) 0:00:22.977 *******
2026-02-27 00:46:23.043186 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:46:23.043195 | orchestrator |
2026-02-27 00:46:23.043200 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-27 00:46:23.043205 | orchestrator | Friday 27 February 2026 00:46:18 +0000 (0:00:00.494) 0:00:23.471 *******
2026-02-27 00:46:23.043208 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:46:23.043212 | orchestrator |
2026-02-27 00:46:23.043216 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-27 00:46:23.043220 | orchestrator | Friday 27 February 2026 00:46:19 +0000 (0:00:00.554) 0:00:24.026 *******
2026-02-27 00:46:23.043224 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:46:23.043227 | orchestrator |
2026-02-27 00:46:23.043231 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-27 00:46:23.043235 | orchestrator | Friday 27 February 2026 00:46:19 +0000 (0:00:00.149) 0:00:24.175 *******
2026-02-27 00:46:23.043239 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'vg_name': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:23.043244 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'vg_name': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:23.043248 | orchestrator |
2026-02-27 00:46:23.043252 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-27 00:46:23.043256 | orchestrator | Friday 27 February 2026 00:46:19 +0000 (0:00:00.160) 0:00:24.336 *******
2026-02-27 00:46:23.043262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:23.043291 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:23.043298 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:23.043305 | orchestrator |
2026-02-27 00:46:23.043312 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-27 00:46:23.043318 | orchestrator | Friday 27 February 2026 00:46:20 +0000 (0:00:00.303) 0:00:24.639 *******
2026-02-27 00:46:23.043325 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:23.043330 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:23.043334 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:23.043337 | orchestrator |
2026-02-27 00:46:23.043342 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-27 00:46:23.043346 | orchestrator | Friday 27 February 2026 00:46:20 +0000 (0:00:00.146) 0:00:24.786 *******
2026-02-27 00:46:23.043350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'})
2026-02-27 00:46:23.043354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'})
2026-02-27 00:46:23.043358 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:46:23.043361 | orchestrator |
2026-02-27 00:46:23.043365 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-27 00:46:23.043369 | orchestrator | Friday 27 February 2026 00:46:20 +0000 (0:00:00.148) 0:00:24.935 *******
2026-02-27 00:46:23.043384 | orchestrator | ok: [testbed-node-3] => {
2026-02-27 00:46:23.043388 | orchestrator |     "lvm_report": {
2026-02-27 00:46:23.043392 | orchestrator |         "lv": [
2026-02-27 00:46:23.043396 | orchestrator |             {
2026-02-27 00:46:23.043400 | orchestrator |                 "lv_name": "osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d",
2026-02-27 00:46:23.043405 | orchestrator |                 "vg_name": "ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d"
2026-02-27 00:46:23.043408 | orchestrator |             },
2026-02-27 00:46:23.043414 | orchestrator |             {
2026-02-27 00:46:23.043420 | orchestrator |                 "lv_name": "osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a",
2026-02-27 00:46:23.043426 | orchestrator |                 "vg_name": "ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a"
2026-02-27 00:46:23.043433 | orchestrator |             }
2026-02-27 00:46:23.043437 | orchestrator |         ],
2026-02-27 00:46:23.043441 | orchestrator |         "pv": [
2026-02-27 00:46:23.043445 | orchestrator |             {
2026-02-27 00:46:23.043448 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-27 00:46:23.043452 | orchestrator |                 "vg_name": "ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a"
2026-02-27 00:46:23.043456 | orchestrator |             },
2026-02-27 00:46:23.043460 | orchestrator |             {
2026-02-27 00:46:23.043463 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-27 00:46:23.043478 | orchestrator |                 "vg_name": "ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d"
2026-02-27 00:46:23.043482 | orchestrator |             }
2026-02-27 00:46:23.043486 | orchestrator |         ]
2026-02-27 00:46:23.043490 | orchestrator |     }
2026-02-27 00:46:23.043494 | orchestrator | }
2026-02-27 00:46:23.043498 | orchestrator |
2026-02-27 00:46:23.043503 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-27 00:46:23.043509 | orchestrator |
2026-02-27 00:46:23.043514 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-27 00:46:23.043521 | orchestrator | Friday 27 February 2026 00:46:20 +0000 (0:00:00.237) 0:00:25.242 *******
2026-02-27 00:46:23.043533 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-27 00:46:23.043563 | orchestrator |
2026-02-27 00:46:23.043570 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-27 00:46:23.043576 | orchestrator | Friday 27 February 2026 00:46:20 +0000 (0:00:00.221) 0:00:25.480 *******
2026-02-27 00:46:23.043582 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:46:23.043588 | orchestrator |
2026-02-27 00:46:23.043593 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:23.043598 | orchestrator | Friday 27 February 2026 00:46:21 +0000 (0:00:00.221) 0:00:25.702 *******
2026-02-27 00:46:23.043609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-27 00:46:23.043614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-27 00:46:23.043618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-27 00:46:23.043623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-27 00:46:23.043627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-27 00:46:23.043632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-27 00:46:23.043636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-27 00:46:23.043644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-27 00:46:23.043655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-27 00:46:23.043660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-27 00:46:23.043664 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-27 00:46:23.043668 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-27 00:46:23.043672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-27 00:46:23.043677 | orchestrator |
2026-02-27 00:46:23.043681 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:23.043685 | orchestrator | Friday 27 February 2026 00:46:21 +0000 (0:00:00.371) 0:00:26.074 *******
2026-02-27 00:46:23.043690 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:23.043694 | orchestrator |
2026-02-27 00:46:23.043699 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:23.043703 | orchestrator | Friday 27 February 2026 00:46:21 +0000 (0:00:00.198) 0:00:26.272 *******
2026-02-27 00:46:23.043707 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:23.043712 | orchestrator |
2026-02-27 00:46:23.043716 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:23.043721 | orchestrator | Friday 27 February 2026 00:46:21 +0000 (0:00:00.193) 0:00:26.466 *******
2026-02-27 00:46:23.043725 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:23.043730 | orchestrator |
2026-02-27 00:46:23.043734 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:23.043738 | orchestrator | Friday 27 February 2026 00:46:22 +0000 (0:00:00.503) 0:00:26.969 *******
2026-02-27 00:46:23.043743 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:23.043748 | orchestrator |
2026-02-27 00:46:23.043754 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:23.043760 | orchestrator | Friday 27 February 2026 00:46:22 +0000 (0:00:00.198) 0:00:27.167 *******
2026-02-27 00:46:23.043767 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:23.043773 | orchestrator |
2026-02-27 00:46:23.043779 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:23.043785 | orchestrator | Friday 27 February 2026 00:46:22 +0000 (0:00:00.219) 0:00:27.387 *******
2026-02-27 00:46:23.043796 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:23.043803 | orchestrator |
2026-02-27 00:46:23.043815 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:35.513488 | orchestrator | Friday 27 February 2026 00:46:23 +0000 (0:00:00.203) 0:00:27.590 *******
2026-02-27 00:46:35.513711 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.513743 | orchestrator |
2026-02-27 00:46:35.513767 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:35.513789 | orchestrator | Friday 27 February 2026 00:46:23 +0000 (0:00:00.190) 0:00:27.781 *******
2026-02-27 00:46:35.513808 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.513828 | orchestrator |
2026-02-27 00:46:35.513849 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:35.513870 | orchestrator | Friday 27 February 2026 00:46:23 +0000 (0:00:00.219) 0:00:28.001 *******
2026-02-27 00:46:35.513891 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297)
2026-02-27 00:46:35.513913 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297)
2026-02-27 00:46:35.513934 | orchestrator |
2026-02-27 00:46:35.513957 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:35.513982 | orchestrator | Friday 27 February 2026 00:46:23 +0000 (0:00:00.464) 0:00:28.465 *******
2026-02-27 00:46:35.514003 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d)
2026-02-27 00:46:35.514101 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d)
2026-02-27 00:46:35.514124 | orchestrator |
2026-02-27 00:46:35.514147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:35.514171 | orchestrator | Friday 27 February 2026 00:46:24 +0000 (0:00:00.479) 0:00:28.945 *******
2026-02-27 00:46:35.514192 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da)
2026-02-27 00:46:35.514215 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da)
2026-02-27 00:46:35.514236 | orchestrator |
2026-02-27 00:46:35.514257 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:35.514277 | orchestrator | Friday 27 February 2026 00:46:24 +0000 (0:00:00.461) 0:00:29.406 *******
2026-02-27 00:46:35.514296 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47)
2026-02-27 00:46:35.514314 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47)
2026-02-27 00:46:35.514333 | orchestrator |
2026-02-27 00:46:35.514350 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:35.514368 | orchestrator | Friday 27 February 2026 00:46:25 +0000 (0:00:00.697) 0:00:30.104 *******
2026-02-27 00:46:35.514386 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-27 00:46:35.514404 | orchestrator |
2026-02-27 00:46:35.514422 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.514440 | orchestrator | Friday 27 February 2026 00:46:26 +0000 (0:00:00.602) 0:00:30.706 *******
2026-02-27 00:46:35.514481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-27 00:46:35.514502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-27 00:46:35.514521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-27 00:46:35.514570 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-27 00:46:35.514590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-27 00:46:35.514610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-27 00:46:35.514663 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-27 00:46:35.514681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-27 00:46:35.514699 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-27 00:46:35.514717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-27 00:46:35.514735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-27 00:46:35.514752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-27 00:46:35.514771 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-27 00:46:35.514788 | orchestrator |
2026-02-27 00:46:35.514807 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.514846 | orchestrator | Friday 27 February 2026 00:46:27 +0000 (0:00:00.956) 0:00:31.663 *******
2026-02-27 00:46:35.514858 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.514881 | orchestrator |
2026-02-27 00:46:35.514892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.514904 | orchestrator | Friday 27 February 2026 00:46:27 +0000 (0:00:00.231) 0:00:31.895 *******
2026-02-27 00:46:35.514915 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.514926 | orchestrator |
2026-02-27 00:46:35.514937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.514948 | orchestrator | Friday 27 February 2026 00:46:27 +0000 (0:00:00.249) 0:00:32.144 *******
2026-02-27 00:46:35.514959 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.514970 | orchestrator |
2026-02-27 00:46:35.515006 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515018 | orchestrator | Friday 27 February 2026 00:46:27 +0000 (0:00:00.236) 0:00:32.380 *******
2026-02-27 00:46:35.515028 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515039 | orchestrator |
2026-02-27 00:46:35.515050 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515061 | orchestrator | Friday 27 February 2026 00:46:28 +0000 (0:00:00.290) 0:00:32.671 *******
2026-02-27 00:46:35.515071 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515082 | orchestrator |
2026-02-27 00:46:35.515092 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515103 | orchestrator | Friday 27 February 2026 00:46:28 +0000 (0:00:00.222) 0:00:32.894 *******
2026-02-27 00:46:35.515114 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515124 | orchestrator |
2026-02-27 00:46:35.515135 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515146 | orchestrator | Friday 27 February 2026 00:46:28 +0000 (0:00:00.212) 0:00:33.106 *******
2026-02-27 00:46:35.515156 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515167 | orchestrator |
2026-02-27 00:46:35.515178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515189 | orchestrator | Friday 27 February 2026 00:46:28 +0000 (0:00:00.301) 0:00:33.408 *******
2026-02-27 00:46:35.515199 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515210 | orchestrator |
2026-02-27 00:46:35.515221 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515231 | orchestrator | Friday 27 February 2026 00:46:29 +0000 (0:00:00.221) 0:00:33.630 *******
2026-02-27 00:46:35.515242 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-27 00:46:35.515253 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-27 00:46:35.515264 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-27 00:46:35.515275 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-27 00:46:35.515286 | orchestrator |
2026-02-27 00:46:35.515297 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515319 | orchestrator | Friday 27 February 2026 00:46:30 +0000 (0:00:01.066) 0:00:34.696 *******
2026-02-27 00:46:35.515330 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515340 | orchestrator |
2026-02-27 00:46:35.515351 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515362 | orchestrator | Friday 27 February 2026 00:46:30 +0000 (0:00:00.249) 0:00:34.946 *******
2026-02-27 00:46:35.515372 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515383 | orchestrator |
2026-02-27 00:46:35.515394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515404 | orchestrator | Friday 27 February 2026 00:46:31 +0000 (0:00:00.715) 0:00:35.662 *******
2026-02-27 00:46:35.515415 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515426 | orchestrator |
2026-02-27 00:46:35.515437 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:35.515447 | orchestrator | Friday 27 February 2026 00:46:31 +0000 (0:00:00.292) 0:00:35.954 *******
2026-02-27 00:46:35.515458 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515469 | orchestrator |
2026-02-27 00:46:35.515479 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-27 00:46:35.515490 | orchestrator | Friday 27 February 2026 00:46:31 +0000 (0:00:00.242) 0:00:36.197 *******
2026-02-27 00:46:35.515501 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515512 | orchestrator |
2026-02-27 00:46:35.515523 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-27 00:46:35.515561 | orchestrator | Friday 27 February 2026 00:46:31 +0000 (0:00:00.165) 0:00:36.362 *******
2026-02-27 00:46:35.515581 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'}})
2026-02-27 00:46:35.515601 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '91c1f24e-fd77-555b-b1fb-5152ae0ce974'}})
2026-02-27 00:46:35.515618 | orchestrator |
2026-02-27 00:46:35.515634 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-27 00:46:35.515646 | orchestrator | Friday 27 February 2026 00:46:32 +0000 (0:00:00.242) 0:00:36.605 *******
2026-02-27 00:46:35.515658 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:35.515671 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:35.515682 | orchestrator |
2026-02-27 00:46:35.515692 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-27 00:46:35.515703 | orchestrator | Friday 27 February 2026 00:46:33 +0000 (0:00:01.887) 0:00:38.493 *******
2026-02-27 00:46:35.515713 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:35.515725 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:35.515736 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:35.515747 | orchestrator |
2026-02-27 00:46:35.515757 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-27 00:46:35.515768 | orchestrator | Friday 27 February 2026 00:46:34 +0000 (0:00:00.206) 0:00:38.699 *******
2026-02-27 00:46:35.515779 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:35.515797 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:41.138116 | orchestrator |
2026-02-27 00:46:41.138224 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-27 00:46:41.138263 | orchestrator | Friday 27 February 2026 00:46:35 +0000 (0:00:01.355) 0:00:40.055 *******
2026-02-27 00:46:41.138288 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:41.138323 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:41.138335 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138347 | orchestrator |
2026-02-27 00:46:41.138358 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-27 00:46:41.138381 | orchestrator | Friday 27 February 2026 00:46:35 +0000 (0:00:00.163) 0:00:40.218 *******
2026-02-27 00:46:41.138393 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138404 | orchestrator |
2026-02-27 00:46:41.138415 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-27 00:46:41.138426 | orchestrator | Friday 27 February 2026 00:46:35 +0000 (0:00:00.148) 0:00:40.367 *******
2026-02-27 00:46:41.138437 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:41.138448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:41.138459 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138470 | orchestrator |
2026-02-27 00:46:41.138481 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-27 00:46:41.138492 | orchestrator | Friday 27 February 2026 00:46:35 +0000 (0:00:00.156) 0:00:40.523 *******
2026-02-27 00:46:41.138502 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138513 | orchestrator |
2026-02-27 00:46:41.138550 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-27 00:46:41.138563 | orchestrator | Friday 27 February 2026 00:46:36 +0000 (0:00:00.156) 0:00:40.680 *******
2026-02-27 00:46:41.138574 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:41.138585 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:41.138596 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138609 | orchestrator |
2026-02-27 00:46:41.138622 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-27 00:46:41.138642 | orchestrator | Friday 27 February 2026 00:46:36 +0000 (0:00:00.377) 0:00:41.058 *******
2026-02-27 00:46:41.138654 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138667 | orchestrator |
2026-02-27 00:46:41.138679 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-27 00:46:41.138704 | orchestrator | Friday 27 February 2026 00:46:36 +0000 (0:00:00.162) 0:00:41.220 *******
2026-02-27 00:46:41.138716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:41.138729 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:41.138741 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138754 | orchestrator |
2026-02-27 00:46:41.138766 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-27 00:46:41.138779 | orchestrator | Friday 27 February 2026 00:46:36 +0000 (0:00:00.157) 0:00:41.378 *******
2026-02-27 00:46:41.138792 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:46:41.138806 | orchestrator |
2026-02-27 00:46:41.138825 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-27 00:46:41.138856 | orchestrator | Friday 27 February 2026 00:46:36 +0000 (0:00:00.139) 0:00:41.518 *******
2026-02-27 00:46:41.138875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:41.138895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:41.138915 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.138934 | orchestrator |
2026-02-27 00:46:41.138952 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-27 00:46:41.138971 | orchestrator | Friday 27 February 2026 00:46:37 +0000 (0:00:00.156) 0:00:41.674 *******
2026-02-27 00:46:41.138982 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27 00:46:41.138993 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})
2026-02-27 00:46:41.139004 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:46:41.139015 | orchestrator |
2026-02-27 00:46:41.139025 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-27 00:46:41.139056 | orchestrator | Friday 27 February 2026 00:46:37 +0000 (0:00:00.163) 0:00:41.838 *******
2026-02-27 00:46:41.139068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})
2026-02-27
00:46:41.139079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:41.139090 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139100 | orchestrator | 2026-02-27 00:46:41.139111 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-27 00:46:41.139122 | orchestrator | Friday 27 February 2026 00:46:37 +0000 (0:00:00.155) 0:00:41.994 ******* 2026-02-27 00:46:41.139133 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139143 | orchestrator | 2026-02-27 00:46:41.139154 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-27 00:46:41.139165 | orchestrator | Friday 27 February 2026 00:46:37 +0000 (0:00:00.162) 0:00:42.156 ******* 2026-02-27 00:46:41.139176 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139186 | orchestrator | 2026-02-27 00:46:41.139197 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-27 00:46:41.139208 | orchestrator | Friday 27 February 2026 00:46:37 +0000 (0:00:00.149) 0:00:42.306 ******* 2026-02-27 00:46:41.139218 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139229 | orchestrator | 2026-02-27 00:46:41.139240 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-27 00:46:41.139250 | orchestrator | Friday 27 February 2026 00:46:37 +0000 (0:00:00.142) 0:00:42.448 ******* 2026-02-27 00:46:41.139261 | orchestrator | ok: [testbed-node-4] => { 2026-02-27 00:46:41.139272 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-27 00:46:41.139283 | orchestrator | } 2026-02-27 00:46:41.139294 | orchestrator | 2026-02-27 00:46:41.139305 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-27 
00:46:41.139316 | orchestrator | Friday 27 February 2026 00:46:38 +0000 (0:00:00.154) 0:00:42.603 ******* 2026-02-27 00:46:41.139326 | orchestrator | ok: [testbed-node-4] => { 2026-02-27 00:46:41.139337 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-27 00:46:41.139348 | orchestrator | } 2026-02-27 00:46:41.139358 | orchestrator | 2026-02-27 00:46:41.139369 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-27 00:46:41.139380 | orchestrator | Friday 27 February 2026 00:46:38 +0000 (0:00:00.153) 0:00:42.756 ******* 2026-02-27 00:46:41.139398 | orchestrator | ok: [testbed-node-4] => { 2026-02-27 00:46:41.139409 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-27 00:46:41.139421 | orchestrator | } 2026-02-27 00:46:41.139431 | orchestrator | 2026-02-27 00:46:41.139442 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-27 00:46:41.139453 | orchestrator | Friday 27 February 2026 00:46:38 +0000 (0:00:00.382) 0:00:43.139 ******* 2026-02-27 00:46:41.139464 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:46:41.139474 | orchestrator | 2026-02-27 00:46:41.139485 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-27 00:46:41.139502 | orchestrator | Friday 27 February 2026 00:46:39 +0000 (0:00:00.514) 0:00:43.654 ******* 2026-02-27 00:46:41.139513 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:46:41.139547 | orchestrator | 2026-02-27 00:46:41.139563 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-27 00:46:41.139575 | orchestrator | Friday 27 February 2026 00:46:39 +0000 (0:00:00.507) 0:00:44.161 ******* 2026-02-27 00:46:41.139585 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:46:41.139596 | orchestrator | 2026-02-27 00:46:41.139607 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-27 00:46:41.139618 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.513) 0:00:44.675 ******* 2026-02-27 00:46:41.139629 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:46:41.139639 | orchestrator | 2026-02-27 00:46:41.139650 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-27 00:46:41.139661 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.140) 0:00:44.816 ******* 2026-02-27 00:46:41.139671 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139682 | orchestrator | 2026-02-27 00:46:41.139693 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-27 00:46:41.139703 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.117) 0:00:44.934 ******* 2026-02-27 00:46:41.139714 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139725 | orchestrator | 2026-02-27 00:46:41.139735 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-27 00:46:41.139746 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.106) 0:00:45.041 ******* 2026-02-27 00:46:41.139757 | orchestrator | ok: [testbed-node-4] => { 2026-02-27 00:46:41.139768 | orchestrator |  "vgs_report": { 2026-02-27 00:46:41.139779 | orchestrator |  "vg": [] 2026-02-27 00:46:41.139795 | orchestrator |  } 2026-02-27 00:46:41.139811 | orchestrator | } 2026-02-27 00:46:41.139831 | orchestrator | 2026-02-27 00:46:41.139857 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-27 00:46:41.139877 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.129) 0:00:45.170 ******* 2026-02-27 00:46:41.139895 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139912 | orchestrator | 2026-02-27 00:46:41.139929 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-27 00:46:41.139947 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.136) 0:00:45.306 ******* 2026-02-27 00:46:41.139964 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.139983 | orchestrator | 2026-02-27 00:46:41.140001 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-27 00:46:41.140020 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.114) 0:00:45.421 ******* 2026-02-27 00:46:41.140039 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.140057 | orchestrator | 2026-02-27 00:46:41.140076 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-27 00:46:41.140094 | orchestrator | Friday 27 February 2026 00:46:40 +0000 (0:00:00.127) 0:00:45.548 ******* 2026-02-27 00:46:41.140113 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:41.140132 | orchestrator | 2026-02-27 00:46:41.140164 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-27 00:46:46.282181 | orchestrator | Friday 27 February 2026 00:46:41 +0000 (0:00:00.137) 0:00:45.685 ******* 2026-02-27 00:46:46.282254 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282262 | orchestrator | 2026-02-27 00:46:46.282269 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-27 00:46:46.282276 | orchestrator | Friday 27 February 2026 00:46:41 +0000 (0:00:00.379) 0:00:46.065 ******* 2026-02-27 00:46:46.282282 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282287 | orchestrator | 2026-02-27 00:46:46.282293 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-27 00:46:46.282299 | orchestrator | Friday 27 February 2026 00:46:41 +0000 (0:00:00.166) 0:00:46.232 ******* 2026-02-27 00:46:46.282306 | orchestrator | skipping: [testbed-node-4] 
2026-02-27 00:46:46.282311 | orchestrator | 2026-02-27 00:46:46.282317 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-27 00:46:46.282323 | orchestrator | Friday 27 February 2026 00:46:41 +0000 (0:00:00.144) 0:00:46.377 ******* 2026-02-27 00:46:46.282329 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282334 | orchestrator | 2026-02-27 00:46:46.282339 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-27 00:46:46.282346 | orchestrator | Friday 27 February 2026 00:46:41 +0000 (0:00:00.143) 0:00:46.521 ******* 2026-02-27 00:46:46.282351 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282357 | orchestrator | 2026-02-27 00:46:46.282363 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-27 00:46:46.282370 | orchestrator | Friday 27 February 2026 00:46:42 +0000 (0:00:00.159) 0:00:46.681 ******* 2026-02-27 00:46:46.282376 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282382 | orchestrator | 2026-02-27 00:46:46.282388 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-27 00:46:46.282392 | orchestrator | Friday 27 February 2026 00:46:42 +0000 (0:00:00.146) 0:00:46.827 ******* 2026-02-27 00:46:46.282396 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282399 | orchestrator | 2026-02-27 00:46:46.282403 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-27 00:46:46.282407 | orchestrator | Friday 27 February 2026 00:46:42 +0000 (0:00:00.150) 0:00:46.978 ******* 2026-02-27 00:46:46.282411 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282414 | orchestrator | 2026-02-27 00:46:46.282418 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-27 00:46:46.282422 | orchestrator | 
Friday 27 February 2026 00:46:42 +0000 (0:00:00.163) 0:00:47.142 ******* 2026-02-27 00:46:46.282425 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282429 | orchestrator | 2026-02-27 00:46:46.282434 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-27 00:46:46.282441 | orchestrator | Friday 27 February 2026 00:46:42 +0000 (0:00:00.144) 0:00:47.286 ******* 2026-02-27 00:46:46.282448 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282453 | orchestrator | 2026-02-27 00:46:46.282457 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-27 00:46:46.282464 | orchestrator | Friday 27 February 2026 00:46:42 +0000 (0:00:00.141) 0:00:47.427 ******* 2026-02-27 00:46:46.282471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282480 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282485 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282489 | orchestrator | 2026-02-27 00:46:46.282492 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-27 00:46:46.282496 | orchestrator | Friday 27 February 2026 00:46:43 +0000 (0:00:00.164) 0:00:47.592 ******* 2026-02-27 00:46:46.282500 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282508 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282512 | orchestrator | skipping: 
[testbed-node-4] 2026-02-27 00:46:46.282516 | orchestrator | 2026-02-27 00:46:46.282535 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-27 00:46:46.282541 | orchestrator | Friday 27 February 2026 00:46:43 +0000 (0:00:00.169) 0:00:47.761 ******* 2026-02-27 00:46:46.282548 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282555 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282581 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282587 | orchestrator | 2026-02-27 00:46:46.282594 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-27 00:46:46.282601 | orchestrator | Friday 27 February 2026 00:46:43 +0000 (0:00:00.409) 0:00:48.171 ******* 2026-02-27 00:46:46.282605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282609 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282613 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282616 | orchestrator | 2026-02-27 00:46:46.282631 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-27 00:46:46.282635 | orchestrator | Friday 27 February 2026 00:46:43 +0000 (0:00:00.171) 0:00:48.343 ******* 2026-02-27 00:46:46.282638 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 
'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282642 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282646 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282650 | orchestrator | 2026-02-27 00:46:46.282654 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-27 00:46:46.282657 | orchestrator | Friday 27 February 2026 00:46:43 +0000 (0:00:00.169) 0:00:48.512 ******* 2026-02-27 00:46:46.282661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282665 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282669 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282673 | orchestrator | 2026-02-27 00:46:46.282677 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-27 00:46:46.282681 | orchestrator | Friday 27 February 2026 00:46:44 +0000 (0:00:00.186) 0:00:48.699 ******* 2026-02-27 00:46:46.282715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282724 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282728 | orchestrator | 2026-02-27 00:46:46.282732 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-27 
00:46:46.282735 | orchestrator | Friday 27 February 2026 00:46:44 +0000 (0:00:00.166) 0:00:48.865 ******* 2026-02-27 00:46:46.282739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282753 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282757 | orchestrator | 2026-02-27 00:46:46.282760 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-27 00:46:46.282764 | orchestrator | Friday 27 February 2026 00:46:44 +0000 (0:00:00.167) 0:00:49.032 ******* 2026-02-27 00:46:46.282769 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:46:46.282773 | orchestrator | 2026-02-27 00:46:46.282777 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-27 00:46:46.282782 | orchestrator | Friday 27 February 2026 00:46:44 +0000 (0:00:00.512) 0:00:49.544 ******* 2026-02-27 00:46:46.282786 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:46:46.282792 | orchestrator | 2026-02-27 00:46:46.282798 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-27 00:46:46.282805 | orchestrator | Friday 27 February 2026 00:46:45 +0000 (0:00:00.607) 0:00:50.152 ******* 2026-02-27 00:46:46.282812 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:46:46.282817 | orchestrator | 2026-02-27 00:46:46.282823 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-27 00:46:46.282830 | orchestrator | Friday 27 February 2026 00:46:45 +0000 (0:00:00.159) 0:00:50.311 ******* 2026-02-27 00:46:46.282837 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'vg_name': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'}) 2026-02-27 00:46:46.282845 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'vg_name': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'}) 2026-02-27 00:46:46.282851 | orchestrator | 2026-02-27 00:46:46.282857 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-27 00:46:46.282864 | orchestrator | Friday 27 February 2026 00:46:45 +0000 (0:00:00.171) 0:00:50.482 ******* 2026-02-27 00:46:46.282871 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282878 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:46.282884 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:46.282891 | orchestrator | 2026-02-27 00:46:46.282897 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-27 00:46:46.282904 | orchestrator | Friday 27 February 2026 00:46:46 +0000 (0:00:00.178) 0:00:50.661 ******* 2026-02-27 00:46:46.282911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:46.282919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:52.466343 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:52.466460 | orchestrator | 2026-02-27 00:46:52.466477 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-27 00:46:52.466489 | 
orchestrator | Friday 27 February 2026 00:46:46 +0000 (0:00:00.168) 0:00:50.829 ******* 2026-02-27 00:46:52.466501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'})  2026-02-27 00:46:52.466587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'})  2026-02-27 00:46:52.466610 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:46:52.466623 | orchestrator | 2026-02-27 00:46:52.466634 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-27 00:46:52.466681 | orchestrator | Friday 27 February 2026 00:46:46 +0000 (0:00:00.150) 0:00:50.980 ******* 2026-02-27 00:46:52.466693 | orchestrator | ok: [testbed-node-4] => { 2026-02-27 00:46:52.466704 | orchestrator |  "lvm_report": { 2026-02-27 00:46:52.466716 | orchestrator |  "lv": [ 2026-02-27 00:46:52.466726 | orchestrator |  { 2026-02-27 00:46:52.466737 | orchestrator |  "lv_name": "osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974", 2026-02-27 00:46:52.466749 | orchestrator |  "vg_name": "ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974" 2026-02-27 00:46:52.466760 | orchestrator |  }, 2026-02-27 00:46:52.466770 | orchestrator |  { 2026-02-27 00:46:52.466781 | orchestrator |  "lv_name": "osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3", 2026-02-27 00:46:52.466792 | orchestrator |  "vg_name": "ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3" 2026-02-27 00:46:52.466802 | orchestrator |  } 2026-02-27 00:46:52.466813 | orchestrator |  ], 2026-02-27 00:46:52.466823 | orchestrator |  "pv": [ 2026-02-27 00:46:52.466833 | orchestrator |  { 2026-02-27 00:46:52.466844 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-27 00:46:52.466854 | orchestrator |  "vg_name": "ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3" 2026-02-27 00:46:52.466865 | orchestrator |  }, 2026-02-27 
00:46:52.466875 | orchestrator |  { 2026-02-27 00:46:52.466887 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-27 00:46:52.466900 | orchestrator |  "vg_name": "ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974" 2026-02-27 00:46:52.466912 | orchestrator |  } 2026-02-27 00:46:52.466925 | orchestrator |  ] 2026-02-27 00:46:52.466936 | orchestrator |  } 2026-02-27 00:46:52.466948 | orchestrator | } 2026-02-27 00:46:52.466960 | orchestrator | 2026-02-27 00:46:52.466972 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-27 00:46:52.466984 | orchestrator | 2026-02-27 00:46:52.466996 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-27 00:46:52.467009 | orchestrator | Friday 27 February 2026 00:46:46 +0000 (0:00:00.458) 0:00:51.438 ******* 2026-02-27 00:46:52.467035 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-27 00:46:52.467048 | orchestrator | 2026-02-27 00:46:52.467061 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-27 00:46:52.467075 | orchestrator | Friday 27 February 2026 00:46:47 +0000 (0:00:00.240) 0:00:51.679 ******* 2026-02-27 00:46:52.467087 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:46:52.467100 | orchestrator | 2026-02-27 00:46:52.467112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:52.467125 | orchestrator | Friday 27 February 2026 00:46:47 +0000 (0:00:00.223) 0:00:51.902 ******* 2026-02-27 00:46:52.467137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-27 00:46:52.467149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-27 00:46:52.467161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-27 00:46:52.467173 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-27 00:46:52.467186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-27 00:46:52.467197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-27 00:46:52.467210 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-27 00:46:52.467223 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-27 00:46:52.467234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-27 00:46:52.467245 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-27 00:46:52.467264 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-27 00:46:52.467275 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-27 00:46:52.467286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-27 00:46:52.467297 | orchestrator | 2026-02-27 00:46:52.467307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:52.467322 | orchestrator | Friday 27 February 2026 00:46:47 +0000 (0:00:00.425) 0:00:52.328 ******* 2026-02-27 00:46:52.467333 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:46:52.467344 | orchestrator | 2026-02-27 00:46:52.467355 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-27 00:46:52.467365 | orchestrator | Friday 27 February 2026 00:46:47 +0000 (0:00:00.193) 0:00:52.521 ******* 2026-02-27 00:46:52.467376 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:46:52.467387 | orchestrator | 2026-02-27 
00:46:52.467398 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467427 | orchestrator | Friday 27 February 2026 00:46:48 +0000 (0:00:00.195) 0:00:52.717 *******
2026-02-27 00:46:52.467439 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:46:52.467449 | orchestrator |
2026-02-27 00:46:52.467460 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467471 | orchestrator | Friday 27 February 2026 00:46:48 +0000 (0:00:00.195) 0:00:52.912 *******
2026-02-27 00:46:52.467481 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:46:52.467492 | orchestrator |
2026-02-27 00:46:52.467502 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467540 | orchestrator | Friday 27 February 2026 00:46:48 +0000 (0:00:00.203) 0:00:53.116 *******
2026-02-27 00:46:52.467553 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:46:52.467564 | orchestrator |
2026-02-27 00:46:52.467575 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467586 | orchestrator | Friday 27 February 2026 00:46:49 +0000 (0:00:00.626) 0:00:53.742 *******
2026-02-27 00:46:52.467597 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:46:52.467607 | orchestrator |
2026-02-27 00:46:52.467618 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467629 | orchestrator | Friday 27 February 2026 00:46:49 +0000 (0:00:00.243) 0:00:53.986 *******
2026-02-27 00:46:52.467640 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:46:52.467650 | orchestrator |
2026-02-27 00:46:52.467661 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467672 | orchestrator | Friday 27 February 2026 00:46:49 +0000 (0:00:00.240) 0:00:54.227 *******
2026-02-27 00:46:52.467683 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:46:52.467694 | orchestrator |
2026-02-27 00:46:52.467705 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467715 | orchestrator | Friday 27 February 2026 00:46:49 +0000 (0:00:00.209) 0:00:54.437 *******
2026-02-27 00:46:52.467726 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686)
2026-02-27 00:46:52.467738 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686)
2026-02-27 00:46:52.467749 | orchestrator |
2026-02-27 00:46:52.467760 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467770 | orchestrator | Friday 27 February 2026 00:46:50 +0000 (0:00:00.427) 0:00:54.865 *******
2026-02-27 00:46:52.467781 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca)
2026-02-27 00:46:52.467792 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca)
2026-02-27 00:46:52.467802 | orchestrator |
2026-02-27 00:46:52.467813 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467836 | orchestrator | Friday 27 February 2026 00:46:50 +0000 (0:00:00.439) 0:00:55.304 *******
2026-02-27 00:46:52.467847 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d)
2026-02-27 00:46:52.467858 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d)
2026-02-27 00:46:52.467869 | orchestrator |
2026-02-27 00:46:52.467879 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467890 | orchestrator | Friday 27 February 2026 00:46:51 +0000 (0:00:00.461) 0:00:55.765 *******
2026-02-27 00:46:52.467901 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3)
2026-02-27 00:46:52.467912 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3)
2026-02-27 00:46:52.467923 | orchestrator |
2026-02-27 00:46:52.467933 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-27 00:46:52.467944 | orchestrator | Friday 27 February 2026 00:46:51 +0000 (0:00:00.440) 0:00:56.206 *******
2026-02-27 00:46:52.467955 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-27 00:46:52.467966 | orchestrator |
2026-02-27 00:46:52.467976 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:46:52.467987 | orchestrator | Friday 27 February 2026 00:46:51 +0000 (0:00:00.339) 0:00:56.545 *******
2026-02-27 00:46:52.467998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-27 00:46:52.468009 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-27 00:46:52.468019 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-27 00:46:52.468030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-27 00:46:52.468040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-27 00:46:52.468051 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-27 00:46:52.468062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-27 00:46:52.468073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-27 00:46:52.468083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-27 00:46:52.468094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-27 00:46:52.468105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-27 00:46:52.468123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-27 00:47:01.807056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-27 00:47:01.807909 | orchestrator |
2026-02-27 00:47:01.807933 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.807941 | orchestrator | Friday 27 February 2026 00:46:52 +0000 (0:00:00.460) 0:00:57.005 *******
2026-02-27 00:47:01.807948 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.807955 | orchestrator |
2026-02-27 00:47:01.807962 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.807968 | orchestrator | Friday 27 February 2026 00:46:52 +0000 (0:00:00.213) 0:00:57.219 *******
2026-02-27 00:47:01.807975 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.807980 | orchestrator |
2026-02-27 00:47:01.807985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.807991 | orchestrator | Friday 27 February 2026 00:46:53 +0000 (0:00:00.706) 0:00:57.926 *******
2026-02-27 00:47:01.807996 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808018 | orchestrator |
2026-02-27 00:47:01.808023 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808028 | orchestrator | Friday 27 February 2026 00:46:53 +0000 (0:00:00.224) 0:00:58.150 *******
2026-02-27 00:47:01.808033 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808038 | orchestrator |
2026-02-27 00:47:01.808043 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808048 | orchestrator | Friday 27 February 2026 00:46:53 +0000 (0:00:00.208) 0:00:58.358 *******
2026-02-27 00:47:01.808053 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808058 | orchestrator |
2026-02-27 00:47:01.808063 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808068 | orchestrator | Friday 27 February 2026 00:46:54 +0000 (0:00:00.236) 0:00:58.595 *******
2026-02-27 00:47:01.808073 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808078 | orchestrator |
2026-02-27 00:47:01.808083 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808088 | orchestrator | Friday 27 February 2026 00:46:54 +0000 (0:00:00.202) 0:00:58.797 *******
2026-02-27 00:47:01.808093 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808098 | orchestrator |
2026-02-27 00:47:01.808103 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808109 | orchestrator | Friday 27 February 2026 00:46:54 +0000 (0:00:00.201) 0:00:58.999 *******
2026-02-27 00:47:01.808114 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808119 | orchestrator |
2026-02-27 00:47:01.808124 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808129 | orchestrator | Friday 27 February 2026 00:46:54 +0000 (0:00:00.229) 0:00:59.228 *******
2026-02-27 00:47:01.808134 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-27 00:47:01.808140 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-27 00:47:01.808145 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-27 00:47:01.808150 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-27 00:47:01.808155 | orchestrator |
2026-02-27 00:47:01.808161 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808166 | orchestrator | Friday 27 February 2026 00:46:55 +0000 (0:00:00.663) 0:00:59.892 *******
2026-02-27 00:47:01.808171 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808176 | orchestrator |
2026-02-27 00:47:01.808181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808186 | orchestrator | Friday 27 February 2026 00:46:55 +0000 (0:00:00.209) 0:01:00.101 *******
2026-02-27 00:47:01.808191 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808196 | orchestrator |
2026-02-27 00:47:01.808201 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808206 | orchestrator | Friday 27 February 2026 00:46:55 +0000 (0:00:00.211) 0:01:00.312 *******
2026-02-27 00:47:01.808211 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808216 | orchestrator |
2026-02-27 00:47:01.808221 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-27 00:47:01.808226 | orchestrator | Friday 27 February 2026 00:46:55 +0000 (0:00:00.203) 0:01:00.516 *******
2026-02-27 00:47:01.808231 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808236 | orchestrator |
2026-02-27 00:47:01.808241 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-27 00:47:01.808246 | orchestrator | Friday 27 February 2026 00:46:56 +0000 (0:00:00.204) 0:01:00.720 *******
2026-02-27 00:47:01.808251 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808256 | orchestrator |
2026-02-27 00:47:01.808261 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-27 00:47:01.808266 | orchestrator | Friday 27 February 2026 00:46:56 +0000 (0:00:00.329) 0:01:01.049 *******
2026-02-27 00:47:01.808271 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '5630d52f-55a8-52f3-8c7d-90d730eab2c2'}})
2026-02-27 00:47:01.808281 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'e90026b5-6780-5a31-9cea-c7916e7559fe'}})
2026-02-27 00:47:01.808286 | orchestrator |
2026-02-27 00:47:01.808291 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-27 00:47:01.808296 | orchestrator | Friday 27 February 2026 00:46:56 +0000 (0:00:00.213) 0:01:01.263 *******
2026-02-27 00:47:01.808302 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:01.808323 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:01.808328 | orchestrator |
2026-02-27 00:47:01.808333 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-27 00:47:01.808351 | orchestrator | Friday 27 February 2026 00:46:58 +0000 (0:00:01.915) 0:01:03.178 *******
2026-02-27 00:47:01.808357 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:01.808363 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:01.808368 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808373 | orchestrator |
2026-02-27 00:47:01.808378 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-27 00:47:01.808383 | orchestrator | Friday 27 February 2026 00:46:58 +0000 (0:00:00.176) 0:01:03.354 *******
2026-02-27 00:47:01.808389 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:01.808394 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:01.808399 | orchestrator |
2026-02-27 00:47:01.808404 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-27 00:47:01.808409 | orchestrator | Friday 27 February 2026 00:47:00 +0000 (0:00:01.315) 0:01:04.670 *******
2026-02-27 00:47:01.808414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:01.808419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:01.808424 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808429 | orchestrator |
2026-02-27 00:47:01.808434 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-27 00:47:01.808439 | orchestrator | Friday 27 February 2026 00:47:00 +0000 (0:00:00.153) 0:01:04.844 *******
2026-02-27 00:47:01.808444 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808449 | orchestrator |
2026-02-27 00:47:01.808454 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-27 00:47:01.808459 | orchestrator | Friday 27 February 2026 00:47:00 +0000 (0:00:00.153) 0:01:04.997 *******
2026-02-27 00:47:01.808464 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:01.808473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:01.808478 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808483 | orchestrator |
2026-02-27 00:47:01.808488 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-27 00:47:01.808493 | orchestrator | Friday 27 February 2026 00:47:00 +0000 (0:00:00.201) 0:01:05.199 *******
2026-02-27 00:47:01.808502 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808555 | orchestrator |
2026-02-27 00:47:01.808564 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-27 00:47:01.808573 | orchestrator | Friday 27 February 2026 00:47:00 +0000 (0:00:00.162) 0:01:05.362 *******
2026-02-27 00:47:01.808579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:01.808584 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:01.808589 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808594 | orchestrator |
2026-02-27 00:47:01.808599 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-27 00:47:01.808604 | orchestrator | Friday 27 February 2026 00:47:00 +0000 (0:00:00.157) 0:01:05.519 *******
2026-02-27 00:47:01.808609 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808614 | orchestrator |
2026-02-27 00:47:01.808619 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-27 00:47:01.808624 | orchestrator | Friday 27 February 2026 00:47:01 +0000 (0:00:00.143) 0:01:05.663 *******
2026-02-27 00:47:01.808629 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:01.808634 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:01.808639 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:01.808644 | orchestrator |
2026-02-27 00:47:01.808649 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-27 00:47:01.808654 | orchestrator | Friday 27 February 2026 00:47:01 +0000 (0:00:00.365) 0:01:05.826 *******
2026-02-27 00:47:01.808659 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:01.808664 | orchestrator |
2026-02-27 00:47:01.808669 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-27 00:47:01.808674 | orchestrator | Friday 27 February 2026 00:47:01 +0000 (0:00:00.162) 0:01:06.192 *******
2026-02-27 00:47:01.808684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:08.334739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:08.334841 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.334856 | orchestrator |
2026-02-27 00:47:08.334866 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-27 00:47:08.334877 | orchestrator | Friday 27 February 2026 00:47:01 +0000 (0:00:00.162) 0:01:06.354 *******
2026-02-27 00:47:08.334885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:08.334894 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:08.334903 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.334911 | orchestrator |
2026-02-27 00:47:08.334917 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-27 00:47:08.334922 | orchestrator | Friday 27 February 2026 00:47:01 +0000 (0:00:00.180) 0:01:06.535 *******
2026-02-27 00:47:08.334927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:08.334931 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:08.334956 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.334961 | orchestrator |
2026-02-27 00:47:08.334966 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-27 00:47:08.334971 | orchestrator | Friday 27 February 2026 00:47:02 +0000 (0:00:00.162) 0:01:06.698 *******
2026-02-27 00:47:08.334976 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.334980 | orchestrator |
2026-02-27 00:47:08.334985 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-27 00:47:08.334990 | orchestrator | Friday 27 February 2026 00:47:02 +0000 (0:00:00.153) 0:01:06.851 *******
2026-02-27 00:47:08.334995 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335000 | orchestrator |
2026-02-27 00:47:08.335004 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-27 00:47:08.335009 | orchestrator | Friday 27 February 2026 00:47:02 +0000 (0:00:00.142) 0:01:06.994 *******
2026-02-27 00:47:08.335014 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335020 | orchestrator |
2026-02-27 00:47:08.335040 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-27 00:47:08.335049 | orchestrator | Friday 27 February 2026 00:47:02 +0000 (0:00:00.148) 0:01:07.142 *******
2026-02-27 00:47:08.335057 | orchestrator | ok: [testbed-node-5] => {
2026-02-27 00:47:08.335066 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-27 00:47:08.335074 | orchestrator | }
2026-02-27 00:47:08.335082 | orchestrator |
2026-02-27 00:47:08.335089 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-27 00:47:08.335097 | orchestrator | Friday 27 February 2026 00:47:02 +0000 (0:00:00.160) 0:01:07.303 *******
2026-02-27 00:47:08.335104 | orchestrator | ok: [testbed-node-5] => {
2026-02-27 00:47:08.335112 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-27 00:47:08.335119 | orchestrator | }
2026-02-27 00:47:08.335127 | orchestrator |
2026-02-27 00:47:08.335136 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-27 00:47:08.335144 | orchestrator | Friday 27 February 2026 00:47:02 +0000 (0:00:00.147) 0:01:07.450 *******
2026-02-27 00:47:08.335152 | orchestrator | ok: [testbed-node-5] => {
2026-02-27 00:47:08.335160 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-27 00:47:08.335168 | orchestrator | }
2026-02-27 00:47:08.335176 | orchestrator |
2026-02-27 00:47:08.335184 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-27 00:47:08.335192 | orchestrator | Friday 27 February 2026 00:47:03 +0000 (0:00:00.152) 0:01:07.603 *******
2026-02-27 00:47:08.335200 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:08.335208 | orchestrator |
2026-02-27 00:47:08.335216 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-27 00:47:08.335224 | orchestrator | Friday 27 February 2026 00:47:03 +0000 (0:00:00.542) 0:01:08.145 *******
2026-02-27 00:47:08.335232 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:08.335240 | orchestrator |
2026-02-27 00:47:08.335249 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-27 00:47:08.335257 | orchestrator | Friday 27 February 2026 00:47:04 +0000 (0:00:00.547) 0:01:08.692 *******
2026-02-27 00:47:08.335265 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:08.335272 | orchestrator |
2026-02-27 00:47:08.335280 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-27 00:47:08.335289 | orchestrator | Friday 27 February 2026 00:47:04 +0000 (0:00:00.756) 0:01:09.448 *******
2026-02-27 00:47:08.335297 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:08.335305 | orchestrator |
2026-02-27 00:47:08.335313 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-27 00:47:08.335322 | orchestrator | Friday 27 February 2026 00:47:05 +0000 (0:00:00.153) 0:01:09.602 *******
2026-02-27 00:47:08.335330 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335339 | orchestrator |
2026-02-27 00:47:08.335347 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-27 00:47:08.335363 | orchestrator | Friday 27 February 2026 00:47:05 +0000 (0:00:00.160) 0:01:09.763 *******
2026-02-27 00:47:08.335371 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335379 | orchestrator |
2026-02-27 00:47:08.335388 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-27 00:47:08.335395 | orchestrator | Friday 27 February 2026 00:47:05 +0000 (0:00:00.111) 0:01:09.874 *******
2026-02-27 00:47:08.335400 | orchestrator | ok: [testbed-node-5] => {
2026-02-27 00:47:08.335406 | orchestrator |  "vgs_report": {
2026-02-27 00:47:08.335412 | orchestrator |  "vg": []
2026-02-27 00:47:08.335431 | orchestrator |  }
2026-02-27 00:47:08.335438 | orchestrator | }
2026-02-27 00:47:08.335443 | orchestrator |
2026-02-27 00:47:08.335449 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-27 00:47:08.335454 | orchestrator | Friday 27 February 2026 00:47:05 +0000 (0:00:00.176) 0:01:10.051 *******
2026-02-27 00:47:08.335460 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335466 | orchestrator |
2026-02-27 00:47:08.335471 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-27 00:47:08.335477 | orchestrator | Friday 27 February 2026 00:47:05 +0000 (0:00:00.140) 0:01:10.191 *******
2026-02-27 00:47:08.335482 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335488 | orchestrator |
2026-02-27 00:47:08.335494 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-27 00:47:08.335536 | orchestrator | Friday 27 February 2026 00:47:05 +0000 (0:00:00.149) 0:01:10.341 *******
2026-02-27 00:47:08.335542 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335548 | orchestrator |
2026-02-27 00:47:08.335554 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-27 00:47:08.335560 | orchestrator | Friday 27 February 2026 00:47:05 +0000 (0:00:00.141) 0:01:10.482 *******
2026-02-27 00:47:08.335565 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335571 | orchestrator |
2026-02-27 00:47:08.335576 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-27 00:47:08.335582 | orchestrator | Friday 27 February 2026 00:47:06 +0000 (0:00:00.184) 0:01:10.666 *******
2026-02-27 00:47:08.335588 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335594 | orchestrator |
2026-02-27 00:47:08.335600 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-27 00:47:08.335607 | orchestrator | Friday 27 February 2026 00:47:06 +0000 (0:00:00.148) 0:01:10.814 *******
2026-02-27 00:47:08.335615 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335622 | orchestrator |
2026-02-27 00:47:08.335630 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-27 00:47:08.335638 | orchestrator | Friday 27 February 2026 00:47:06 +0000 (0:00:00.143) 0:01:10.958 *******
2026-02-27 00:47:08.335646 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335653 | orchestrator |
2026-02-27 00:47:08.335662 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-27 00:47:08.335670 | orchestrator | Friday 27 February 2026 00:47:06 +0000 (0:00:00.146) 0:01:11.105 *******
2026-02-27 00:47:08.335679 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335686 | orchestrator |
2026-02-27 00:47:08.335693 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-27 00:47:08.335698 | orchestrator | Friday 27 February 2026 00:47:06 +0000 (0:00:00.358) 0:01:11.464 *******
2026-02-27 00:47:08.335703 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335708 | orchestrator |
2026-02-27 00:47:08.335717 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-27 00:47:08.335722 | orchestrator | Friday 27 February 2026 00:47:07 +0000 (0:00:00.143) 0:01:11.608 *******
2026-02-27 00:47:08.335727 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335732 | orchestrator |
2026-02-27 00:47:08.335736 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-27 00:47:08.335741 | orchestrator | Friday 27 February 2026 00:47:07 +0000 (0:00:00.153) 0:01:11.761 *******
2026-02-27 00:47:08.335751 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335756 | orchestrator |
2026-02-27 00:47:08.335760 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-27 00:47:08.335765 | orchestrator | Friday 27 February 2026 00:47:07 +0000 (0:00:00.142) 0:01:11.903 *******
2026-02-27 00:47:08.335770 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335775 | orchestrator |
2026-02-27 00:47:08.335780 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-27 00:47:08.335784 | orchestrator | Friday 27 February 2026 00:47:07 +0000 (0:00:00.158) 0:01:12.061 *******
2026-02-27 00:47:08.335789 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335794 | orchestrator |
2026-02-27 00:47:08.335799 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-27 00:47:08.335803 | orchestrator | Friday 27 February 2026 00:47:07 +0000 (0:00:00.149) 0:01:12.211 *******
2026-02-27 00:47:08.335808 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335813 | orchestrator |
2026-02-27 00:47:08.335818 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-27 00:47:08.335822 | orchestrator | Friday 27 February 2026 00:47:07 +0000 (0:00:00.138) 0:01:12.350 *******
2026-02-27 00:47:08.335827 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:08.335833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:08.335837 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335842 | orchestrator |
2026-02-27 00:47:08.335847 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-27 00:47:08.335852 | orchestrator | Friday 27 February 2026 00:47:07 +0000 (0:00:00.182) 0:01:12.532 *******
2026-02-27 00:47:08.335856 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:08.335861 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:08.335866 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:08.335871 | orchestrator |
2026-02-27 00:47:08.335876 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-27 00:47:08.335881 | orchestrator | Friday 27 February 2026 00:47:08 +0000 (0:00:00.188) 0:01:12.721 *******
2026-02-27 00:47:08.335891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504432 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504549 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504560 | orchestrator |
2026-02-27 00:47:11.504565 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-27 00:47:11.504570 | orchestrator | Friday 27 February 2026 00:47:08 +0000 (0:00:00.162) 0:01:12.883 *******
2026-02-27 00:47:11.504575 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504584 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504588 | orchestrator |
2026-02-27 00:47:11.504592 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-27 00:47:11.504596 | orchestrator | Friday 27 February 2026 00:47:08 +0000 (0:00:00.161) 0:01:13.045 *******
2026-02-27 00:47:11.504620 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504624 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504628 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504631 | orchestrator |
2026-02-27 00:47:11.504635 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-27 00:47:11.504639 | orchestrator | Friday 27 February 2026 00:47:08 +0000 (0:00:00.165) 0:01:13.210 *******
2026-02-27 00:47:11.504642 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504646 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504650 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504654 | orchestrator |
2026-02-27 00:47:11.504658 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-27 00:47:11.504663 | orchestrator | Friday 27 February 2026 00:47:09 +0000 (0:00:00.383) 0:01:13.594 *******
2026-02-27 00:47:11.504669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504682 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504687 | orchestrator |
2026-02-27 00:47:11.504692 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-27 00:47:11.504696 | orchestrator | Friday 27 February 2026 00:47:09 +0000 (0:00:00.170) 0:01:13.764 *******
2026-02-27 00:47:11.504699 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504703 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504707 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504711 | orchestrator |
2026-02-27 00:47:11.504714 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-27 00:47:11.504718 | orchestrator | Friday 27 February 2026 00:47:09 +0000 (0:00:00.171) 0:01:13.935 *******
2026-02-27 00:47:11.504722 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:11.504727 | orchestrator |
2026-02-27 00:47:11.504733 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-27 00:47:11.504740 | orchestrator | Friday 27 February 2026 00:47:09 +0000 (0:00:00.532) 0:01:14.468 *******
2026-02-27 00:47:11.504746 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:11.504760 | orchestrator |
2026-02-27 00:47:11.504764 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-27 00:47:11.504774 | orchestrator | Friday 27 February 2026 00:47:10 +0000 (0:00:00.541) 0:01:15.009 *******
2026-02-27 00:47:11.504777 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:47:11.504781 | orchestrator |
2026-02-27 00:47:11.504785 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-27 00:47:11.504788 | orchestrator | Friday 27 February 2026 00:47:10 +0000 (0:00:00.143) 0:01:15.153 *******
2026-02-27 00:47:11.504794 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'vg_name': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504801 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'vg_name': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504810 | orchestrator |
2026-02-27 00:47:11.504817 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-27 00:47:11.504824 | orchestrator | Friday 27 February 2026 00:47:10 +0000 (0:00:00.196) 0:01:15.349 *******
2026-02-27 00:47:11.504858 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504865 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504869 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504873 | orchestrator |
2026-02-27 00:47:11.504877 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-27 00:47:11.504881 | orchestrator | Friday 27 February 2026 00:47:10 +0000 (0:00:00.176) 0:01:15.526 *******
2026-02-27 00:47:11.504885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504889 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504893 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504896 | orchestrator |
2026-02-27 00:47:11.504900 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-27 00:47:11.504904 | orchestrator | Friday 27 February 2026 00:47:11 +0000 (0:00:00.165) 0:01:15.691 *******
2026-02-27 00:47:11.504907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'})
2026-02-27 00:47:11.504911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'})
2026-02-27 00:47:11.504915 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:47:11.504919 | orchestrator |
2026-02-27 00:47:11.504922 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-27 00:47:11.504926 | orchestrator | Friday 27 February 2026 00:47:11 +0000 (0:00:00.182) 0:01:15.874 *******
2026-02-27 00:47:11.504930 |
orchestrator | ok: [testbed-node-5] => { 2026-02-27 00:47:11.504934 | orchestrator |  "lvm_report": { 2026-02-27 00:47:11.504938 | orchestrator |  "lv": [ 2026-02-27 00:47:11.504942 | orchestrator |  { 2026-02-27 00:47:11.504947 | orchestrator |  "lv_name": "osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2", 2026-02-27 00:47:11.504958 | orchestrator |  "vg_name": "ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2" 2026-02-27 00:47:11.504965 | orchestrator |  }, 2026-02-27 00:47:11.504971 | orchestrator |  { 2026-02-27 00:47:11.504978 | orchestrator |  "lv_name": "osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe", 2026-02-27 00:47:11.504985 | orchestrator |  "vg_name": "ceph-e90026b5-6780-5a31-9cea-c7916e7559fe" 2026-02-27 00:47:11.504991 | orchestrator |  } 2026-02-27 00:47:11.504998 | orchestrator |  ], 2026-02-27 00:47:11.505005 | orchestrator |  "pv": [ 2026-02-27 00:47:11.505012 | orchestrator |  { 2026-02-27 00:47:11.505018 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-27 00:47:11.505025 | orchestrator |  "vg_name": "ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2" 2026-02-27 00:47:11.505031 | orchestrator |  }, 2026-02-27 00:47:11.505038 | orchestrator |  { 2026-02-27 00:47:11.505045 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-27 00:47:11.505052 | orchestrator |  "vg_name": "ceph-e90026b5-6780-5a31-9cea-c7916e7559fe" 2026-02-27 00:47:11.505059 | orchestrator |  } 2026-02-27 00:47:11.505066 | orchestrator |  ] 2026-02-27 00:47:11.505073 | orchestrator |  } 2026-02-27 00:47:11.505080 | orchestrator | } 2026-02-27 00:47:11.505091 | orchestrator | 2026-02-27 00:47:11.505098 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:47:11.505104 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-27 00:47:11.505108 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-27 00:47:11.505112 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-27 00:47:11.505117 | orchestrator | 2026-02-27 00:47:11.505121 | orchestrator | 2026-02-27 00:47:11.505125 | orchestrator | 2026-02-27 00:47:11.505130 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:47:11.505134 | orchestrator | Friday 27 February 2026 00:47:11 +0000 (0:00:00.164) 0:01:16.039 ******* 2026-02-27 00:47:11.505138 | orchestrator | =============================================================================== 2026-02-27 00:47:11.505142 | orchestrator | Create block VGs -------------------------------------------------------- 5.82s 2026-02-27 00:47:11.505147 | orchestrator | Create block LVs -------------------------------------------------------- 4.11s 2026-02-27 00:47:11.505151 | orchestrator | Add known partitions to the list of available block devices ------------- 1.85s 2026-02-27 00:47:11.505155 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.78s 2026-02-27 00:47:11.505160 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s 2026-02-27 00:47:11.505164 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.70s 2026-02-27 00:47:11.505168 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2026-02-27 00:47:11.505173 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.54s 2026-02-27 00:47:11.505181 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2026-02-27 00:47:11.964103 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2026-02-27 00:47:11.964263 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s 2026-02-27 00:47:11.964282 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2026-02-27 00:47:11.964293 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2026-02-27 00:47:11.964304 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-02-27 00:47:11.964315 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.76s 2026-02-27 00:47:11.964326 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2026-02-27 00:47:11.964337 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.73s 2026-02-27 00:47:11.964347 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-02-27 00:47:11.964364 | orchestrator | Calculate size needed for LVs on ceph_wal_devices ----------------------- 0.72s 2026-02-27 00:47:11.964382 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-02-27 00:47:24.938656 | orchestrator | 2026-02-27 00:47:24 | INFO  | Task 4088b615-5799-42ec-8c94-d65f32252df0 (facts) was prepared for execution. 2026-02-27 00:47:24.939423 | orchestrator | 2026-02-27 00:47:24 | INFO  | It takes a moment until task 4088b615-5799-42ec-8c94-d65f32252df0 (facts) has been started and output is visible here. 
2026-02-27 00:47:37.787866 | orchestrator | 2026-02-27 00:47:37.787953 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-27 00:47:37.787967 | orchestrator | 2026-02-27 00:47:37.787978 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-27 00:47:37.787988 | orchestrator | Friday 27 February 2026 00:47:28 +0000 (0:00:00.263) 0:00:00.263 ******* 2026-02-27 00:47:37.788021 | orchestrator | ok: [testbed-manager] 2026-02-27 00:47:37.788032 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:47:37.788042 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:47:37.788051 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:47:37.788061 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:47:37.788070 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:47:37.788080 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:47:37.788089 | orchestrator | 2026-02-27 00:47:37.788099 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-27 00:47:37.788120 | orchestrator | Friday 27 February 2026 00:47:30 +0000 (0:00:01.080) 0:00:01.344 ******* 2026-02-27 00:47:37.788131 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:47:37.788141 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:47:37.788151 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:47:37.788162 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:47:37.788172 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:47:37.788183 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:47:37.788194 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:47:37.788204 | orchestrator | 2026-02-27 00:47:37.788215 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-27 00:47:37.788225 | orchestrator | 2026-02-27 00:47:37.788236 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-27 00:47:37.788247 | orchestrator | Friday 27 February 2026 00:47:31 +0000 (0:00:01.133) 0:00:02.477 ******* 2026-02-27 00:47:37.788258 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:47:37.788268 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:47:37.788279 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:47:37.788292 | orchestrator | ok: [testbed-manager] 2026-02-27 00:47:37.788310 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:47:37.788328 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:47:37.788346 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:47:37.788365 | orchestrator | 2026-02-27 00:47:37.788383 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-27 00:47:37.788397 | orchestrator | 2026-02-27 00:47:37.788408 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-27 00:47:37.788419 | orchestrator | Friday 27 February 2026 00:47:36 +0000 (0:00:05.815) 0:00:08.293 ******* 2026-02-27 00:47:37.788429 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:47:37.788442 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:47:37.788454 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:47:37.788466 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:47:37.788535 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:47:37.788556 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:47:37.788574 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:47:37.788587 | orchestrator | 2026-02-27 00:47:37.788601 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:47:37.788622 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:47:37.788637 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-27 00:47:37.788654 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:47:37.788684 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:47:37.788706 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:47:37.788724 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:47:37.788742 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:47:37.788777 | orchestrator | 2026-02-27 00:47:37.788796 | orchestrator | 2026-02-27 00:47:37.788815 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:47:37.788834 | orchestrator | Friday 27 February 2026 00:47:37 +0000 (0:00:00.501) 0:00:08.794 ******* 2026-02-27 00:47:37.788852 | orchestrator | =============================================================================== 2026-02-27 00:47:37.788887 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.82s 2026-02-27 00:47:37.788906 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s 2026-02-27 00:47:37.788926 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s 2026-02-27 00:47:37.788946 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2026-02-27 00:47:49.886107 | orchestrator | 2026-02-27 00:47:49 | INFO  | Task 33011e01-9969-4225-a9d1-21f5a43ab5b1 (frr) was prepared for execution. 2026-02-27 00:47:49.886405 | orchestrator | 2026-02-27 00:47:49 | INFO  | It takes a moment until task 33011e01-9969-4225-a9d1-21f5a43ab5b1 (frr) has been started and output is visible here. 
2026-02-27 00:48:16.681059 | orchestrator | 2026-02-27 00:48:16.681150 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-27 00:48:16.681166 | orchestrator | 2026-02-27 00:48:16.681184 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-27 00:48:16.681201 | orchestrator | Friday 27 February 2026 00:47:55 +0000 (0:00:00.306) 0:00:00.306 ******* 2026-02-27 00:48:16.681218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-27 00:48:16.681236 | orchestrator | 2026-02-27 00:48:16.681251 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-27 00:48:16.681265 | orchestrator | Friday 27 February 2026 00:47:55 +0000 (0:00:00.256) 0:00:00.563 ******* 2026-02-27 00:48:16.681282 | orchestrator | changed: [testbed-manager] 2026-02-27 00:48:16.681299 | orchestrator | 2026-02-27 00:48:16.681317 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-27 00:48:16.681334 | orchestrator | Friday 27 February 2026 00:47:57 +0000 (0:00:01.380) 0:00:01.943 ******* 2026-02-27 00:48:16.681366 | orchestrator | changed: [testbed-manager] 2026-02-27 00:48:16.681380 | orchestrator | 2026-02-27 00:48:16.681401 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-27 00:48:16.681423 | orchestrator | Friday 27 February 2026 00:48:07 +0000 (0:00:09.951) 0:00:11.895 ******* 2026-02-27 00:48:16.681487 | orchestrator | ok: [testbed-manager] 2026-02-27 00:48:16.681504 | orchestrator | 2026-02-27 00:48:16.681521 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-27 00:48:16.681537 | orchestrator | Friday 27 February 2026 00:48:08 +0000 (0:00:00.987) 0:00:12.883 ******* 2026-02-27 
00:48:16.681554 | orchestrator | changed: [testbed-manager] 2026-02-27 00:48:16.681571 | orchestrator | 2026-02-27 00:48:16.681588 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-27 00:48:16.681605 | orchestrator | Friday 27 February 2026 00:48:09 +0000 (0:00:00.920) 0:00:13.804 ******* 2026-02-27 00:48:16.681622 | orchestrator | ok: [testbed-manager] 2026-02-27 00:48:16.681638 | orchestrator | 2026-02-27 00:48:16.681654 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-27 00:48:16.681672 | orchestrator | Friday 27 February 2026 00:48:10 +0000 (0:00:01.100) 0:00:14.904 ******* 2026-02-27 00:48:16.681686 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:48:16.681702 | orchestrator | 2026-02-27 00:48:16.681717 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-27 00:48:16.681733 | orchestrator | Friday 27 February 2026 00:48:10 +0000 (0:00:00.131) 0:00:15.036 ******* 2026-02-27 00:48:16.681750 | orchestrator | skipping: [testbed-manager] 2026-02-27 00:48:16.681793 | orchestrator | 2026-02-27 00:48:16.681810 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-27 00:48:16.681826 | orchestrator | Friday 27 February 2026 00:48:10 +0000 (0:00:00.139) 0:00:15.176 ******* 2026-02-27 00:48:16.681843 | orchestrator | changed: [testbed-manager] 2026-02-27 00:48:16.681859 | orchestrator | 2026-02-27 00:48:16.681875 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-27 00:48:16.681893 | orchestrator | Friday 27 February 2026 00:48:11 +0000 (0:00:00.970) 0:00:16.146 ******* 2026-02-27 00:48:16.681908 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-27 00:48:16.681924 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-27 00:48:16.681941 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-27 00:48:16.681958 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-27 00:48:16.681975 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-27 00:48:16.681990 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-27 00:48:16.682000 | orchestrator | 2026-02-27 00:48:16.682009 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-27 00:48:16.682078 | orchestrator | Friday 27 February 2026 00:48:13 +0000 (0:00:02.198) 0:00:18.344 ******* 2026-02-27 00:48:16.682094 | orchestrator | ok: [testbed-manager] 2026-02-27 00:48:16.682110 | orchestrator | 2026-02-27 00:48:16.682126 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-27 00:48:16.682143 | orchestrator | Friday 27 February 2026 00:48:15 +0000 (0:00:01.468) 0:00:19.812 ******* 2026-02-27 00:48:16.682159 | orchestrator | changed: [testbed-manager] 2026-02-27 00:48:16.682175 | orchestrator | 2026-02-27 00:48:16.682192 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:48:16.682208 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 00:48:16.682226 | orchestrator | 2026-02-27 00:48:16.682242 | orchestrator | 2026-02-27 00:48:16.682258 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:48:16.682274 | orchestrator | Friday 27 February 2026 00:48:16 +0000 (0:00:01.396) 0:00:21.209 ******* 2026-02-27 00:48:16.682291 | 
orchestrator | =============================================================================== 2026-02-27 00:48:16.682307 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.95s 2026-02-27 00:48:16.682323 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.20s 2026-02-27 00:48:16.682339 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.47s 2026-02-27 00:48:16.682356 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.40s 2026-02-27 00:48:16.682372 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.38s 2026-02-27 00:48:16.682410 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.10s 2026-02-27 00:48:16.682426 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s 2026-02-27 00:48:16.682465 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.97s 2026-02-27 00:48:16.682481 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.92s 2026-02-27 00:48:16.682497 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s 2026-02-27 00:48:16.682513 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.14s 2026-02-27 00:48:16.682531 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.13s 2026-02-27 00:48:16.920498 | orchestrator | 2026-02-27 00:48:16.921602 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Feb 27 00:48:16 UTC 2026 2026-02-27 00:48:16.921657 | orchestrator | 2026-02-27 00:48:18.703653 | orchestrator | 2026-02-27 00:48:18 | INFO  | Collection nutshell is prepared for execution 2026-02-27 00:48:18.703754 | orchestrator | 2026-02-27 00:48:18 | INFO  | A [0] - 
dotfiles 2026-02-27 00:48:28.876302 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [0] - homer 2026-02-27 00:48:28.876411 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [0] - netdata 2026-02-27 00:48:28.876471 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [0] - openstackclient 2026-02-27 00:48:28.876490 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [0] - phpmyadmin 2026-02-27 00:48:28.876511 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [0] - common 2026-02-27 00:48:28.876524 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- loadbalancer 2026-02-27 00:48:28.876573 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [2] --- opensearch 2026-02-27 00:48:28.876591 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [2] --- mariadb-ng 2026-02-27 00:48:28.877036 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [3] ---- horizon 2026-02-27 00:48:28.877298 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [3] ---- keystone 2026-02-27 00:48:28.877681 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- neutron 2026-02-27 00:48:28.877997 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [5] ------ wait-for-nova 2026-02-27 00:48:28.878344 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [6] ------- octavia 2026-02-27 00:48:28.879962 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- barbican 2026-02-27 00:48:28.880004 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- designate 2026-02-27 00:48:28.880131 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- ironic 2026-02-27 00:48:28.880733 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- placement 2026-02-27 00:48:28.881906 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- magnum 2026-02-27 00:48:28.881947 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- openvswitch 2026-02-27 00:48:28.881960 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [2] --- ovn 2026-02-27 00:48:28.881972 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- memcached 2026-02-27 
00:48:28.882176 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- redis 2026-02-27 00:48:28.882546 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- rabbitmq-ng 2026-02-27 00:48:28.883217 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [0] - kubernetes 2026-02-27 00:48:28.886577 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- kubeconfig 2026-02-27 00:48:28.887636 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- copy-kubeconfig 2026-02-27 00:48:28.887665 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [0] - ceph 2026-02-27 00:48:28.888890 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [1] -- ceph-pools 2026-02-27 00:48:28.888983 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [2] --- copy-ceph-keys 2026-02-27 00:48:28.889002 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [3] ---- cephclient 2026-02-27 00:48:28.889017 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-02-27 00:48:28.889031 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- wait-for-keystone 2026-02-27 00:48:28.889480 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [5] ------ kolla-ceph-rgw 2026-02-27 00:48:28.889572 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [5] ------ glance 2026-02-27 00:48:28.889583 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [5] ------ cinder 2026-02-27 00:48:28.889616 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [5] ------ nova 2026-02-27 00:48:28.889630 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [4] ----- prometheus 2026-02-27 00:48:28.889745 | orchestrator | 2026-02-27 00:48:28 | INFO  | A [5] ------ grafana 2026-02-27 00:48:29.275646 | orchestrator | 2026-02-27 00:48:29 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-02-27 00:48:29.275748 | orchestrator | 2026-02-27 00:48:29 | INFO  | Tasks are running in the background 2026-02-27 00:48:32.422549 | orchestrator | 2026-02-27 00:48:32 | INFO  | No task IDs specified, wait for all currently running 
tasks 2026-02-27 00:48:34.564690 | orchestrator | 2026-02-27 00:48:34 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:34.566662 | orchestrator | 2026-02-27 00:48:34 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:34.572926 | orchestrator | 2026-02-27 00:48:34 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:34.573475 | orchestrator | 2026-02-27 00:48:34 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:34.574461 | orchestrator | 2026-02-27 00:48:34 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:34.575341 | orchestrator | 2026-02-27 00:48:34 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:34.585449 | orchestrator | 2026-02-27 00:48:34 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:34.585541 | orchestrator | 2026-02-27 00:48:34 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:37.625861 | orchestrator | 2026-02-27 00:48:37 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:37.627895 | orchestrator | 2026-02-27 00:48:37 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:37.632616 | orchestrator | 2026-02-27 00:48:37 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:37.633169 | orchestrator | 2026-02-27 00:48:37 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:37.636139 | orchestrator | 2026-02-27 00:48:37 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:37.637050 | orchestrator | 2026-02-27 00:48:37 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:37.640872 | orchestrator | 2026-02-27 00:48:37 | INFO  | Task 
2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:37.640946 | orchestrator | 2026-02-27 00:48:37 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:40.712965 | orchestrator | 2026-02-27 00:48:40 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:40.713180 | orchestrator | 2026-02-27 00:48:40 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:40.713773 | orchestrator | 2026-02-27 00:48:40 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:40.715161 | orchestrator | 2026-02-27 00:48:40 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:40.719423 | orchestrator | 2026-02-27 00:48:40 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:40.720055 | orchestrator | 2026-02-27 00:48:40 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:40.721317 | orchestrator | 2026-02-27 00:48:40 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:40.721369 | orchestrator | 2026-02-27 00:48:40 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:43.764934 | orchestrator | 2026-02-27 00:48:43 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:43.767852 | orchestrator | 2026-02-27 00:48:43 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:43.769396 | orchestrator | 2026-02-27 00:48:43 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:43.771867 | orchestrator | 2026-02-27 00:48:43 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:43.772742 | orchestrator | 2026-02-27 00:48:43 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:43.773624 | orchestrator | 2026-02-27 00:48:43 | INFO  | Task 
3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:43.776072 | orchestrator | 2026-02-27 00:48:43 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:43.776129 | orchestrator | 2026-02-27 00:48:43 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:46.853814 | orchestrator | 2026-02-27 00:48:46 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:46.853917 | orchestrator | 2026-02-27 00:48:46 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:46.853939 | orchestrator | 2026-02-27 00:48:46 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:46.853957 | orchestrator | 2026-02-27 00:48:46 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:46.853975 | orchestrator | 2026-02-27 00:48:46 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:46.853994 | orchestrator | 2026-02-27 00:48:46 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:46.854075 | orchestrator | 2026-02-27 00:48:46 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:46.854090 | orchestrator | 2026-02-27 00:48:46 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:50.004668 | orchestrator | 2026-02-27 00:48:50 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:50.009072 | orchestrator | 2026-02-27 00:48:50 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:50.010321 | orchestrator | 2026-02-27 00:48:50 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:50.014158 | orchestrator | 2026-02-27 00:48:50 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:50.016967 | orchestrator | 2026-02-27 00:48:50 | INFO  | Task 
45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:50.021628 | orchestrator | 2026-02-27 00:48:50 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:50.025241 | orchestrator | 2026-02-27 00:48:50 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:50.025819 | orchestrator | 2026-02-27 00:48:50 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:53.093361 | orchestrator | 2026-02-27 00:48:53 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:53.093544 | orchestrator | 2026-02-27 00:48:53 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:53.094220 | orchestrator | 2026-02-27 00:48:53 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:53.094879 | orchestrator | 2026-02-27 00:48:53 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:53.095382 | orchestrator | 2026-02-27 00:48:53 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:53.096912 | orchestrator | 2026-02-27 00:48:53 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:53.099056 | orchestrator | 2026-02-27 00:48:53 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:53.099085 | orchestrator | 2026-02-27 00:48:53 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:56.165178 | orchestrator | 2026-02-27 00:48:56 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:56.166764 | orchestrator | 2026-02-27 00:48:56 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:56.167431 | orchestrator | 2026-02-27 00:48:56 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:56.168654 | orchestrator | 2026-02-27 00:48:56 | INFO  | Task 
929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:56.169330 | orchestrator | 2026-02-27 00:48:56 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:56.170550 | orchestrator | 2026-02-27 00:48:56 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:56.172216 | orchestrator | 2026-02-27 00:48:56 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:56.172250 | orchestrator | 2026-02-27 00:48:56 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:48:59.241337 | orchestrator | 2026-02-27 00:48:59 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:48:59.241453 | orchestrator | 2026-02-27 00:48:59 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:48:59.302655 | orchestrator | 2026-02-27 00:48:59 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:48:59.302728 | orchestrator | 2026-02-27 00:48:59 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:48:59.302741 | orchestrator | 2026-02-27 00:48:59 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:48:59.325715 | orchestrator | 2026-02-27 00:48:59 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:48:59.337495 | orchestrator | 2026-02-27 00:48:59 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:48:59.337564 | orchestrator | 2026-02-27 00:48:59 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:02.568854 | orchestrator | 2026-02-27 00:49:02 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:02.568947 | orchestrator | 2026-02-27 00:49:02 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:02.568958 | orchestrator | 2026-02-27 00:49:02 | INFO  | Task 
b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:02.568967 | orchestrator | 2026-02-27 00:49:02 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:02.571025 | orchestrator | 2026-02-27 00:49:02 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:02.575301 | orchestrator | 2026-02-27 00:49:02 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state STARTED 2026-02-27 00:49:02.580514 | orchestrator | 2026-02-27 00:49:02 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:02.582080 | orchestrator | 2026-02-27 00:49:02 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:05.673044 | orchestrator | 2026-02-27 00:49:05 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:05.673123 | orchestrator | 2026-02-27 00:49:05 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:05.673132 | orchestrator | 2026-02-27 00:49:05 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:05.673140 | orchestrator | 2026-02-27 00:49:05 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:05.700957 | orchestrator | 2026-02-27 00:49:05.701028 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2026-02-27 00:49:05.701036 | orchestrator | 2026-02-27 00:49:05.701043 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2026-02-27 00:49:05.701049 | orchestrator | Friday 27 February 2026 00:48:45 +0000 (0:00:00.503) 0:00:00.503 ******* 2026-02-27 00:49:05.701056 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:49:05.701063 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:49:05.701069 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:49:05.701075 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:49:05.701080 | orchestrator | changed: [testbed-manager] 2026-02-27 00:49:05.701086 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:49:05.701092 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:49:05.701097 | orchestrator | 2026-02-27 00:49:05.701103 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2026-02-27 00:49:05.701109 | orchestrator | Friday 27 February 2026 00:48:50 +0000 (0:00:05.760) 0:00:06.263 ******* 2026-02-27 00:49:05.701116 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-02-27 00:49:05.701123 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-02-27 00:49:05.701128 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-02-27 00:49:05.701134 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-02-27 00:49:05.701140 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-02-27 00:49:05.701146 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-02-27 00:49:05.701151 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-02-27 00:49:05.701157 | orchestrator | 2026-02-27 00:49:05.701163 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2026-02-27 00:49:05.701169 | orchestrator | Friday 27 February 2026 00:48:53 +0000 (0:00:02.846) 0:00:09.109 ******* 2026-02-27 00:49:05.701178 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-27 00:48:52.045487', 'end': '2026-02-27 00:48:52.060466', 'delta': '0:00:00.014979', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-27 00:49:05.701191 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-27 00:48:51.947711', 'end': '2026-02-27 00:48:51.958782', 'delta': '0:00:00.011071', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-27 00:49:05.701221 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-27 00:48:51.986968', 'end': '2026-02-27 00:48:51.995414', 'delta': '0:00:00.008446', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-27 00:49:05.701245 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-27 00:48:52.317718', 'end': '2026-02-27 00:48:52.326410', 'delta': '0:00:00.008692', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-27 00:49:05.701252 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-27 00:48:52.739408', 'end': '2026-02-27 00:48:52.745926', 'delta': '0:00:00.006518', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-27 00:49:05.701259 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-27 00:48:52.706544', 'end': '2026-02-27 00:48:52.715350', 'delta': '0:00:00.008806', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-27 00:49:05.701265 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-02-27 00:48:52.919384', 'end': '2026-02-27 00:48:52.928228', 'delta': '0:00:00.008844', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2026-02-27 00:49:05.701276 | orchestrator | 2026-02-27 00:49:05.701282 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2026-02-27 00:49:05.701288 | orchestrator | Friday 27 February 2026 00:48:56 +0000 (0:00:02.762) 0:00:11.872 ******* 2026-02-27 00:49:05.701294 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2026-02-27 00:49:05.701300 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2026-02-27 00:49:05.701309 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2026-02-27 00:49:05.701315 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2026-02-27 00:49:05.701321 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2026-02-27 00:49:05.701327 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2026-02-27 00:49:05.701333 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2026-02-27 00:49:05.701338 | orchestrator | 2026-02-27 00:49:05.701344 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2026-02-27 00:49:05.701350 | orchestrator | Friday 27 February 2026 00:48:59 +0000 (0:00:02.479) 0:00:14.351 ******* 2026-02-27 00:49:05.701356 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2026-02-27 00:49:05.701362 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2026-02-27 00:49:05.701368 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2026-02-27 00:49:05.701374 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2026-02-27 00:49:05.701380 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2026-02-27 00:49:05.701423 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2026-02-27 00:49:05.701429 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2026-02-27 00:49:05.701435 | orchestrator | 2026-02-27 00:49:05.701441 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:49:05.701451 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:49:05.701459 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:49:05.701465 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:49:05.701471 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:49:05.701477 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:49:05.701482 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:49:05.701488 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:49:05.701494 | orchestrator | 2026-02-27 00:49:05.701500 | orchestrator | 2026-02-27 00:49:05.701506 | orchestrator | TASKS 
RECAP ******************************************************************** 2026-02-27 00:49:05.701513 | orchestrator | Friday 27 February 2026 00:49:03 +0000 (0:00:04.068) 0:00:18.419 ******* 2026-02-27 00:49:05.701520 | orchestrator | =============================================================================== 2026-02-27 00:49:05.701526 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.76s 2026-02-27 00:49:05.701545 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.07s 2026-02-27 00:49:05.701552 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.85s 2026-02-27 00:49:05.701559 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.76s 2026-02-27 00:49:05.701566 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.48s 2026-02-27 00:49:05.701572 | orchestrator | 2026-02-27 00:49:05 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:05.701579 | orchestrator | 2026-02-27 00:49:05 | INFO  | Task 3ec1fd44-f1be-4702-bb98-4d01ea3e2d7e is in state SUCCESS 2026-02-27 00:49:05.701586 | orchestrator | 2026-02-27 00:49:05 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:05.701593 | orchestrator | 2026-02-27 00:49:05 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:09.074118 | orchestrator | 2026-02-27 00:49:08 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:09.074320 | orchestrator | 2026-02-27 00:49:08 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:09.074410 | orchestrator | 2026-02-27 00:49:08 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:09.074433 | orchestrator | 2026-02-27 00:49:08 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is 
in state STARTED 2026-02-27 00:49:09.074451 | orchestrator | 2026-02-27 00:49:08 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:09.074470 | orchestrator | 2026-02-27 00:49:08 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:09.074489 | orchestrator | 2026-02-27 00:49:08 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:09.074508 | orchestrator | 2026-02-27 00:49:08 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:12.118314 | orchestrator | 2026-02-27 00:49:12 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:12.121813 | orchestrator | 2026-02-27 00:49:12 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:12.123266 | orchestrator | 2026-02-27 00:49:12 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:12.125344 | orchestrator | 2026-02-27 00:49:12 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:12.126171 | orchestrator | 2026-02-27 00:49:12 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:12.127101 | orchestrator | 2026-02-27 00:49:12 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:12.127653 | orchestrator | 2026-02-27 00:49:12 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:12.128283 | orchestrator | 2026-02-27 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:15.225349 | orchestrator | 2026-02-27 00:49:15 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:15.229526 | orchestrator | 2026-02-27 00:49:15 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:15.231158 | orchestrator | 2026-02-27 00:49:15 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in 
state STARTED 2026-02-27 00:49:15.232572 | orchestrator | 2026-02-27 00:49:15 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:15.234528 | orchestrator | 2026-02-27 00:49:15 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:15.235834 | orchestrator | 2026-02-27 00:49:15 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:15.238786 | orchestrator | 2026-02-27 00:49:15 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:15.238841 | orchestrator | 2026-02-27 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:18.301911 | orchestrator | 2026-02-27 00:49:18 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:18.302553 | orchestrator | 2026-02-27 00:49:18 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:18.304125 | orchestrator | 2026-02-27 00:49:18 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:18.309239 | orchestrator | 2026-02-27 00:49:18 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:18.310855 | orchestrator | 2026-02-27 00:49:18 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:18.311869 | orchestrator | 2026-02-27 00:49:18 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:18.313035 | orchestrator | 2026-02-27 00:49:18 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:18.313081 | orchestrator | 2026-02-27 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:21.409329 | orchestrator | 2026-02-27 00:49:21 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:21.409493 | orchestrator | 2026-02-27 00:49:21 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state 
STARTED 2026-02-27 00:49:21.409510 | orchestrator | 2026-02-27 00:49:21 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:21.409522 | orchestrator | 2026-02-27 00:49:21 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:21.409533 | orchestrator | 2026-02-27 00:49:21 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:21.409544 | orchestrator | 2026-02-27 00:49:21 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:21.409555 | orchestrator | 2026-02-27 00:49:21 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:21.409816 | orchestrator | 2026-02-27 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:24.639890 | orchestrator | 2026-02-27 00:49:24 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state STARTED 2026-02-27 00:49:24.642310 | orchestrator | 2026-02-27 00:49:24 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:24.646444 | orchestrator | 2026-02-27 00:49:24 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:24.653100 | orchestrator | 2026-02-27 00:49:24 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:24.653211 | orchestrator | 2026-02-27 00:49:24 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:24.655530 | orchestrator | 2026-02-27 00:49:24 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:24.658064 | orchestrator | 2026-02-27 00:49:24 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:24.658499 | orchestrator | 2026-02-27 00:49:24 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:27.730864 | orchestrator | 2026-02-27 00:49:27 | INFO  | Task f93fbae1-a597-42d6-87d3-385f67c9c1a0 is in state SUCCESS 
2026-02-27 00:49:27.732324 | orchestrator | 2026-02-27 00:49:27 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:27.734184 | orchestrator | 2026-02-27 00:49:27 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:27.735789 | orchestrator | 2026-02-27 00:49:27 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:27.738436 | orchestrator | 2026-02-27 00:49:27 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:27.748996 | orchestrator | 2026-02-27 00:49:27 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:27.750264 | orchestrator | 2026-02-27 00:49:27 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:27.750312 | orchestrator | 2026-02-27 00:49:27 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:31.588246 | orchestrator | 2026-02-27 00:49:30 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:31.588353 | orchestrator | 2026-02-27 00:49:30 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:31.588447 | orchestrator | 2026-02-27 00:49:30 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:31.588456 | orchestrator | 2026-02-27 00:49:30 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:31.588463 | orchestrator | 2026-02-27 00:49:30 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:31.588470 | orchestrator | 2026-02-27 00:49:30 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:31.588478 | orchestrator | 2026-02-27 00:49:30 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:34.791993 | orchestrator | 2026-02-27 00:49:33 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 
2026-02-27 00:49:34.792108 | orchestrator | 2026-02-27 00:49:33 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:34.792132 | orchestrator | 2026-02-27 00:49:33 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:34.792169 | orchestrator | 2026-02-27 00:49:33 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:34.792181 | orchestrator | 2026-02-27 00:49:33 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:34.792192 | orchestrator | 2026-02-27 00:49:33 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:34.792203 | orchestrator | 2026-02-27 00:49:33 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:37.090826 | orchestrator | 2026-02-27 00:49:37 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:37.090930 | orchestrator | 2026-02-27 00:49:37 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:37.090946 | orchestrator | 2026-02-27 00:49:37 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:37.090958 | orchestrator | 2026-02-27 00:49:37 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:37.090969 | orchestrator | 2026-02-27 00:49:37 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:37.090980 | orchestrator | 2026-02-27 00:49:37 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:37.090991 | orchestrator | 2026-02-27 00:49:37 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:40.133910 | orchestrator | 2026-02-27 00:49:40 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:40.136614 | orchestrator | 2026-02-27 00:49:40 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 
2026-02-27 00:49:40.139618 | orchestrator | 2026-02-27 00:49:40 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:40.143097 | orchestrator | 2026-02-27 00:49:40 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:40.143156 | orchestrator | 2026-02-27 00:49:40 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:40.146978 | orchestrator | 2026-02-27 00:49:40 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:40.147062 | orchestrator | 2026-02-27 00:49:40 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:43.696336 | orchestrator | 2026-02-27 00:49:43 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state STARTED 2026-02-27 00:49:43.696512 | orchestrator | 2026-02-27 00:49:43 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:43.696535 | orchestrator | 2026-02-27 00:49:43 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:43.696553 | orchestrator | 2026-02-27 00:49:43 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:43.696569 | orchestrator | 2026-02-27 00:49:43 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:43.696586 | orchestrator | 2026-02-27 00:49:43 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:43.696603 | orchestrator | 2026-02-27 00:49:43 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:46.374744 | orchestrator | 2026-02-27 00:49:46 | INFO  | Task c71e75ff-ba5a-42db-8fe1-70c11a3fb6f9 is in state SUCCESS 2026-02-27 00:49:46.376340 | orchestrator | 2026-02-27 00:49:46 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:46.377248 | orchestrator | 2026-02-27 00:49:46 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 
2026-02-27 00:49:46.378596 | orchestrator | 2026-02-27 00:49:46 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:46.379752 | orchestrator | 2026-02-27 00:49:46 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:46.380264 | orchestrator | 2026-02-27 00:49:46 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:46.380314 | orchestrator | 2026-02-27 00:49:46 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:49.449113 | orchestrator | 2026-02-27 00:49:49 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:49.453012 | orchestrator | 2026-02-27 00:49:49 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:49.458309 | orchestrator | 2026-02-27 00:49:49 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:49.463509 | orchestrator | 2026-02-27 00:49:49 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:49.465819 | orchestrator | 2026-02-27 00:49:49 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:49:49.465886 | orchestrator | 2026-02-27 00:49:49 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:49:52.517236 | orchestrator | 2026-02-27 00:49:52 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED 2026-02-27 00:49:52.519272 | orchestrator | 2026-02-27 00:49:52 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:49:52.521614 | orchestrator | 2026-02-27 00:49:52 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED 2026-02-27 00:49:52.523187 | orchestrator | 2026-02-27 00:49:52 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED 2026-02-27 00:49:52.524628 | orchestrator | 2026-02-27 00:49:52 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 
2026-02-27 00:49:52.525012 | orchestrator | 2026-02-27 00:49:52 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:49:55.582157 | orchestrator | 2026-02-27 00:49:55 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:49:55.584695 | orchestrator | 2026-02-27 00:49:55 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:49:55.584711 | orchestrator | 2026-02-27 00:49:55 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:49:55.585241 | orchestrator | 2026-02-27 00:49:55 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:49:55.586539 | orchestrator | 2026-02-27 00:49:55 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:49:55.586650 | orchestrator | 2026-02-27 00:49:55 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:49:58.669136 | orchestrator | 2026-02-27 00:49:58 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:49:58.669915 | orchestrator | 2026-02-27 00:49:58 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:49:58.670012 | orchestrator | 2026-02-27 00:49:58 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:49:58.675748 | orchestrator | 2026-02-27 00:49:58 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:49:58.675811 | orchestrator | 2026-02-27 00:49:58 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:49:58.675821 | orchestrator | 2026-02-27 00:49:58 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:01.770382 | orchestrator | 2026-02-27 00:50:01 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:01.770662 | orchestrator | 2026-02-27 00:50:01 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:01.772347 | orchestrator | 2026-02-27 00:50:01 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:01.774057 | orchestrator | 2026-02-27 00:50:01 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:01.776309 | orchestrator | 2026-02-27 00:50:01 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:01.776461 | orchestrator | 2026-02-27 00:50:01 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:04.832456 | orchestrator | 2026-02-27 00:50:04 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:04.835551 | orchestrator | 2026-02-27 00:50:04 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:04.837921 | orchestrator | 2026-02-27 00:50:04 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:04.849544 | orchestrator | 2026-02-27 00:50:04 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:04.858116 | orchestrator | 2026-02-27 00:50:04 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:04.858188 | orchestrator | 2026-02-27 00:50:04 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:07.903164 | orchestrator | 2026-02-27 00:50:07 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:07.905249 | orchestrator | 2026-02-27 00:50:07 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:07.906241 | orchestrator | 2026-02-27 00:50:07 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:07.907975 | orchestrator | 2026-02-27 00:50:07 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:07.910406 | orchestrator | 2026-02-27 00:50:07 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:07.910777 | orchestrator | 2026-02-27 00:50:07 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:10.951426 | orchestrator | 2026-02-27 00:50:10 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:10.954690 | orchestrator | 2026-02-27 00:50:10 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:10.955267 | orchestrator | 2026-02-27 00:50:10 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:10.956655 | orchestrator | 2026-02-27 00:50:10 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:10.958171 | orchestrator | 2026-02-27 00:50:10 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:10.958204 | orchestrator | 2026-02-27 00:50:10 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:14.108815 | orchestrator | 2026-02-27 00:50:14 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:14.108924 | orchestrator | 2026-02-27 00:50:14 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:14.108943 | orchestrator | 2026-02-27 00:50:14 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:14.108956 | orchestrator | 2026-02-27 00:50:14 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:14.108970 | orchestrator | 2026-02-27 00:50:14 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:14.108984 | orchestrator | 2026-02-27 00:50:14 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:17.138668 | orchestrator | 2026-02-27 00:50:17 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:17.138758 | orchestrator | 2026-02-27 00:50:17 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:17.138775 | orchestrator | 2026-02-27 00:50:17 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:17.138790 | orchestrator | 2026-02-27 00:50:17 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:17.139685 | orchestrator | 2026-02-27 00:50:17 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:17.139709 | orchestrator | 2026-02-27 00:50:17 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:20.183026 | orchestrator | 2026-02-27 00:50:20 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:20.185249 | orchestrator | 2026-02-27 00:50:20 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:20.186718 | orchestrator | 2026-02-27 00:50:20 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:20.188278 | orchestrator | 2026-02-27 00:50:20 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:20.190246 | orchestrator | 2026-02-27 00:50:20 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:20.190292 | orchestrator | 2026-02-27 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:23.229183 | orchestrator | 2026-02-27 00:50:23 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:23.230885 | orchestrator | 2026-02-27 00:50:23 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:23.232552 | orchestrator | 2026-02-27 00:50:23 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:23.236336 | orchestrator | 2026-02-27 00:50:23 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:23.239436 | orchestrator | 2026-02-27 00:50:23 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:23.239476 | orchestrator | 2026-02-27 00:50:23 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:26.357778 | orchestrator | 2026-02-27 00:50:26 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:26.358577 | orchestrator | 2026-02-27 00:50:26 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:26.361399 | orchestrator | 2026-02-27 00:50:26 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state STARTED
2026-02-27 00:50:26.365607 | orchestrator | 2026-02-27 00:50:26 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:26.370429 | orchestrator | 2026-02-27 00:50:26 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:26.370499 | orchestrator | 2026-02-27 00:50:26 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:29.494520 | orchestrator |
2026-02-27 00:50:29.494629 | orchestrator |
2026-02-27 00:50:29.494647 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-02-27 00:50:29.494660 | orchestrator |
2026-02-27 00:50:29.494673 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-02-27 00:50:29.494686 | orchestrator | Friday 27 February 2026 00:48:45 +0000 (0:00:01.344) 0:00:01.344 *******
2026-02-27 00:50:29.494698 | orchestrator | ok: [testbed-manager] => {
2026-02-27 00:50:29.494711 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-02-27 00:50:29.494724 | orchestrator | }
2026-02-27 00:50:29.494736 | orchestrator |
2026-02-27 00:50:29.494747 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-02-27 00:50:29.494759 | orchestrator | Friday 27 February 2026 00:48:46 +0000 (0:00:00.591) 0:00:01.935 *******
2026-02-27 00:50:29.494770 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:29.494783 | orchestrator |
2026-02-27 00:50:29.494794 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-02-27 00:50:29.494806 | orchestrator | Friday 27 February 2026 00:48:48 +0000 (0:00:02.030) 0:00:03.966 *******
2026-02-27 00:50:29.494818 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-02-27 00:50:29.494829 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-02-27 00:50:29.494841 | orchestrator |
2026-02-27 00:50:29.494853 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-02-27 00:50:29.494865 | orchestrator | Friday 27 February 2026 00:48:50 +0000 (0:00:02.292) 0:00:06.258 *******
2026-02-27 00:50:29.494876 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.494888 | orchestrator |
2026-02-27 00:50:29.494899 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-02-27 00:50:29.494911 | orchestrator | Friday 27 February 2026 00:48:53 +0000 (0:00:03.159) 0:00:09.418 *******
2026-02-27 00:50:29.494922 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.495007 | orchestrator |
2026-02-27 00:50:29.495022 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-02-27 00:50:29.495051 | orchestrator | Friday 27 February 2026 00:48:55 +0000 (0:00:02.073) 0:00:11.492 *******
2026-02-27 00:50:29.495064 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-02-27 00:50:29.495076 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:29.495089 | orchestrator |
2026-02-27 00:50:29.495102 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-02-27 00:50:29.495114 | orchestrator | Friday 27 February 2026 00:49:22 +0000 (0:00:26.499) 0:00:37.991 *******
2026-02-27 00:50:29.495126 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.495138 | orchestrator |
2026-02-27 00:50:29.495150 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:50:29.495164 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:50:29.495178 | orchestrator |
2026-02-27 00:50:29.495190 | orchestrator |
2026-02-27 00:50:29.495203 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:50:29.495216 | orchestrator | Friday 27 February 2026 00:49:24 +0000 (0:00:02.167) 0:00:40.158 *******
2026-02-27 00:50:29.495228 | orchestrator | ===============================================================================
2026-02-27 00:50:29.495241 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.50s
2026-02-27 00:50:29.495253 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.16s
2026-02-27 00:50:29.495266 | orchestrator | osism.services.homer : Create required directories ---------------------- 2.29s
2026-02-27 00:50:29.495278 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.17s
2026-02-27 00:50:29.495291 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.07s
2026-02-27 00:50:29.495325 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.03s
2026-02-27 00:50:29.495337 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.59s
2026-02-27 00:50:29.495348 | orchestrator |
2026-02-27 00:50:29.495359 | orchestrator |
2026-02-27 00:50:29.495370 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-27 00:50:29.495380 | orchestrator |
2026-02-27 00:50:29.495391 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-27 00:50:29.495402 | orchestrator | Friday 27 February 2026 00:48:45 +0000 (0:00:01.010) 0:00:01.010 *******
2026-02-27 00:50:29.495413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-27 00:50:29.495425 | orchestrator |
2026-02-27 00:50:29.495436 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-27 00:50:29.495446 | orchestrator | Friday 27 February 2026 00:48:46 +0000 (0:00:01.248) 0:00:02.259 *******
2026-02-27 00:50:29.495457 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-27 00:50:29.495467 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-27 00:50:29.495478 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-27 00:50:29.495489 | orchestrator |
2026-02-27 00:50:29.495500 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-27 00:50:29.495510 | orchestrator | Friday 27 February 2026 00:48:49 +0000 (0:00:02.766) 0:00:05.026 *******
2026-02-27 00:50:29.495521 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.495532 | orchestrator |
2026-02-27 00:50:29.495543 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-27 00:50:29.495554 | orchestrator | Friday 27 February 2026 00:48:52 +0000 (0:00:02.686) 0:00:07.713 *******
2026-02-27 00:50:29.495584 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-02-27 00:50:29.495606 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:29.495617 | orchestrator |
2026-02-27 00:50:29.495627 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-27 00:50:29.495638 | orchestrator | Friday 27 February 2026 00:49:29 +0000 (0:00:37.509) 0:00:45.222 *******
2026-02-27 00:50:29.495649 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.495660 | orchestrator |
2026-02-27 00:50:29.495670 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-27 00:50:29.495681 | orchestrator | Friday 27 February 2026 00:49:33 +0000 (0:00:03.445) 0:00:48.667 *******
2026-02-27 00:50:29.495692 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:29.495702 | orchestrator |
2026-02-27 00:50:29.495727 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-27 00:50:29.495738 | orchestrator | Friday 27 February 2026 00:49:36 +0000 (0:00:03.599) 0:00:52.267 *******
2026-02-27 00:50:29.495749 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.495760 | orchestrator |
2026-02-27 00:50:29.495783 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-27 00:50:29.495794 | orchestrator | Friday 27 February 2026 00:49:40 +0000 (0:00:03.654) 0:00:55.921 *******
2026-02-27 00:50:29.495804 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.495815 | orchestrator |
2026-02-27 00:50:29.495826 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-27 00:50:29.495836 | orchestrator | Friday 27 February 2026 00:49:41 +0000 (0:00:01.057) 0:00:56.978 *******
2026-02-27 00:50:29.495847 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.495858 | orchestrator |
2026-02-27 00:50:29.495869 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-27 00:50:29.495880 | orchestrator | Friday 27 February 2026 00:49:43 +0000 (0:00:01.466) 0:00:58.445 *******
2026-02-27 00:50:29.495890 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:29.495901 | orchestrator |
2026-02-27 00:50:29.495912 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:50:29.495923 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:50:29.495934 | orchestrator |
2026-02-27 00:50:29.495944 | orchestrator |
2026-02-27 00:50:29.495955 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:50:29.495966 | orchestrator | Friday 27 February 2026 00:49:44 +0000 (0:00:01.247) 0:00:59.693 *******
2026-02-27 00:50:29.495976 | orchestrator | ===============================================================================
2026-02-27 00:50:29.495987 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.51s
2026-02-27 00:50:29.495998 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.65s
2026-02-27 00:50:29.496008 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 3.60s
2026-02-27 00:50:29.496070 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 3.45s
2026-02-27 00:50:29.496083 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.77s
2026-02-27 00:50:29.496094 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.69s
2026-02-27 00:50:29.496105 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.47s
2026-02-27 00:50:29.496116 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.25s
2026-02-27 00:50:29.496126 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.25s
2026-02-27 00:50:29.496137 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.06s
2026-02-27 00:50:29.496148 | orchestrator |
2026-02-27 00:50:29.496159 | orchestrator |
2026-02-27 00:50:29.496169 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-02-27 00:50:29.496180 | orchestrator |
2026-02-27 00:50:29.496191 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-02-27 00:50:29.496209 | orchestrator | Friday 27 February 2026 00:49:10 +0000 (0:00:00.364) 0:00:00.364 *******
2026-02-27 00:50:29.496220 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:29.496231 | orchestrator |
2026-02-27 00:50:29.496242 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-02-27 00:50:29.496252 | orchestrator | Friday 27 February 2026 00:49:12 +0000 (0:00:01.663) 0:00:02.028 *******
2026-02-27 00:50:29.496263 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-02-27 00:50:29.496274 | orchestrator |
2026-02-27 00:50:29.496284 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-02-27 00:50:29.496295 | orchestrator | Friday 27 February 2026 00:49:13 +0000 (0:00:00.704) 0:00:02.733 *******
2026-02-27 00:50:29.496327 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.496338 | orchestrator |
2026-02-27 00:50:29.496349 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-02-27 00:50:29.496360 | orchestrator | Friday 27 February 2026 00:49:14 +0000 (0:00:01.701) 0:00:04.435 *******
2026-02-27 00:50:29.496371 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-02-27 00:50:29.496381 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:29.496392 | orchestrator |
2026-02-27 00:50:29.496403 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-02-27 00:50:29.496414 | orchestrator | Friday 27 February 2026 00:50:23 +0000 (0:01:08.950) 0:01:13.385 *******
2026-02-27 00:50:29.496430 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:29.496441 | orchestrator |
2026-02-27 00:50:29.496452 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:50:29.496463 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:50:29.496477 | orchestrator |
2026-02-27 00:50:29.496496 | orchestrator |
2026-02-27 00:50:29.496513 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:50:29.496542 | orchestrator | Friday 27 February 2026 00:50:27 +0000 (0:00:03.818) 0:01:17.204 *******
2026-02-27 00:50:29.496560 | orchestrator | ===============================================================================
2026-02-27 00:50:29.496578 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 68.95s
2026-02-27 00:50:29.496595 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.82s
2026-02-27 00:50:29.496613 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.70s
2026-02-27 00:50:29.496624 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.66s
2026-02-27 00:50:29.496635 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.71s
2026-02-27 00:50:29.496646 | orchestrator | 2026-02-27 00:50:29 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:29.496658 | orchestrator | 2026-02-27 00:50:29 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:29.496668 | orchestrator | 2026-02-27 00:50:29 | INFO  | Task 8b5e28aa-eb91-4aa7-ab1d-5f9dd6958676 is in state SUCCESS
2026-02-27 00:50:29.496679 | orchestrator | 2026-02-27 00:50:29 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:29.496690 | orchestrator | 2026-02-27 00:50:29 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:29.496701 | orchestrator | 2026-02-27 00:50:29 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:32.542502 | orchestrator | 2026-02-27 00:50:32 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:32.542698 | orchestrator | 2026-02-27 00:50:32 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:32.548456 | orchestrator | 2026-02-27 00:50:32 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:32.548575 | orchestrator | 2026-02-27 00:50:32 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:32.548592 | orchestrator | 2026-02-27 00:50:32 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:35.610602 | orchestrator | 2026-02-27 00:50:35 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:35.611498 | orchestrator | 2026-02-27 00:50:35 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:35.613366 | orchestrator | 2026-02-27 00:50:35 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:35.615872 | orchestrator | 2026-02-27 00:50:35 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:35.615930 | orchestrator | 2026-02-27 00:50:35 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:38.656658 | orchestrator | 2026-02-27 00:50:38 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:38.658591 | orchestrator | 2026-02-27 00:50:38 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:38.661223 | orchestrator | 2026-02-27 00:50:38 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:38.662732 | orchestrator | 2026-02-27 00:50:38 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:38.662777 | orchestrator | 2026-02-27 00:50:38 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:41.707122 | orchestrator | 2026-02-27 00:50:41 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:41.709008 | orchestrator | 2026-02-27 00:50:41 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:41.709767 | orchestrator | 2026-02-27 00:50:41 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state STARTED
2026-02-27 00:50:41.710803 | orchestrator | 2026-02-27 00:50:41 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:41.710876 | orchestrator | 2026-02-27 00:50:41 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:44.780036 | orchestrator | 2026-02-27 00:50:44 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:44.782573 | orchestrator | 2026-02-27 00:50:44 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:44.784442 | orchestrator | 2026-02-27 00:50:44 | INFO  | Task 45ebd985-43df-4a5a-8b3a-02bd26beb037 is in state SUCCESS
2026-02-27 00:50:44.785384 | orchestrator |
2026-02-27 00:50:44.785418 | orchestrator |
2026-02-27 00:50:44.785423 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 00:50:44.785428 | orchestrator |
2026-02-27 00:50:44.785432 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 00:50:44.785437 | orchestrator | Friday 27 February 2026 00:48:48 +0000 (0:00:00.819) 0:00:00.819 *******
2026-02-27 00:50:44.785441 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-27 00:50:44.785445 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-27 00:50:44.785449 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-27 00:50:44.785453 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-27 00:50:44.785457 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-27 00:50:44.785461 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-27 00:50:44.785465 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-27 00:50:44.785468 | orchestrator |
2026-02-27 00:50:44.785485 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-27 00:50:44.785489 | orchestrator |
2026-02-27 00:50:44.785493 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-27 00:50:44.785496 | orchestrator | Friday 27 February 2026 00:48:49 +0000 (0:00:01.284) 0:00:02.104 *******
2026-02-27 00:50:44.785509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 00:50:44.785518 | orchestrator |
2026-02-27 00:50:44.785524 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-27 00:50:44.785530 | orchestrator | Friday 27 February 2026 00:48:52 +0000 (0:00:03.492) 0:00:05.596 *******
2026-02-27 00:50:44.785536 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:44.785544 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:50:44.785550 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:50:44.785555 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:50:44.785561 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:50:44.785567 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:50:44.785573 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:50:44.785579 | orchestrator |
2026-02-27 00:50:44.785586 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-27 00:50:44.785591 | orchestrator | Friday 27 February 2026 00:48:56 +0000 (0:00:03.269) 0:00:08.866 *******
2026-02-27 00:50:44.785595 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:50:44.785599 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:50:44.785602 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:50:44.785607 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:50:44.785610 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:44.785615 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:50:44.785619 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:50:44.785622 | orchestrator |
2026-02-27 00:50:44.785626 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-27 00:50:44.785630 | orchestrator | Friday 27 February 2026 00:49:00 +0000 (0:00:03.780) 0:00:12.647 *******
2026-02-27 00:50:44.785634 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:50:44.785638 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:50:44.785650 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:50:44.785654 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:50:44.785657 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:50:44.785661 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:50:44.785665 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:44.785669 | orchestrator |
2026-02-27 00:50:44.785672 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-27 00:50:44.785676 | orchestrator | Friday 27 February 2026 00:49:03 +0000 (0:00:03.312) 0:00:15.959 *******
2026-02-27 00:50:44.785680 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:50:44.785684 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:50:44.785687 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:50:44.785691 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:50:44.785695 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:50:44.785698 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:50:44.785702 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:44.785706 | orchestrator |
2026-02-27 00:50:44.785709 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-27 00:50:44.785713 | orchestrator | Friday 27 February 2026 00:49:22 +0000 (0:00:19.355) 0:00:35.315 *******
2026-02-27 00:50:44.785717 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:50:44.785721 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:50:44.785724 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:50:44.785728 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:50:44.785732 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:50:44.785735 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:50:44.785739 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:44.785749 | orchestrator |
2026-02-27 00:50:44.785753 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-27 00:50:44.785757 | orchestrator | Friday 27 February 2026 00:50:08 +0000 (0:00:46.118) 0:01:21.433 *******
2026-02-27 00:50:44.785761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 00:50:44.785767 | orchestrator |
2026-02-27 00:50:44.785770 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-27 00:50:44.785774 | orchestrator | Friday 27 February 2026 00:50:10 +0000 (0:00:01.439) 0:01:22.873 *******
2026-02-27 00:50:44.785778 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-27 00:50:44.785785 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-27 00:50:44.785789 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-27 00:50:44.785793 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-27 00:50:44.785805 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-27 00:50:44.785809 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-27 00:50:44.785812 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-27 00:50:44.785816 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-27 00:50:44.785820 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-27 00:50:44.785824 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-27 00:50:44.785827 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-27 00:50:44.785831 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-27 00:50:44.785835 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-27 00:50:44.785838 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-27 00:50:44.785842 | orchestrator |
2026-02-27 00:50:44.785846 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-27 00:50:44.785850 | orchestrator | Friday 27 February 2026 00:50:16 +0000 (0:00:05.759) 0:01:28.632 *******
2026-02-27 00:50:44.785854 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:44.785858 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:50:44.785862 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:50:44.785865 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:50:44.785869 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:50:44.785873 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:50:44.785876 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:50:44.785880 | orchestrator |
2026-02-27 00:50:44.785884 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-27 00:50:44.785888 | orchestrator | Friday 27 February 2026 00:50:17 +0000 (0:00:01.424) 0:01:30.057 *******
2026-02-27 00:50:44.785891 | orchestrator | changed: [testbed-manager]
2026-02-27 00:50:44.785895 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:50:44.785899 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:50:44.785903 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:50:44.785906 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:50:44.785910 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:50:44.785914 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:50:44.785917 | orchestrator |
2026-02-27 00:50:44.785921 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-27 00:50:44.785925 | orchestrator | Friday 27 February 2026 00:50:19 +0000 (0:00:02.212) 0:01:32.269 *******
2026-02-27 00:50:44.785929 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:50:44.785932 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:50:44.785936 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:50:44.785940 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:50:44.785943 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:50:44.785947 | orchestrator | ok: [testbed-manager]
2026-02-27 00:50:44.785951 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:50:44.785958 | orchestrator |
2026-02-27 00:50:44.785962 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-27 00:50:44.785966 | orchestrator | Friday 27 February 2026 00:50:21 +0000 (0:00:01.800) 0:01:34.070 ******* 2026-02-27 00:50:44.785970 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:50:44.785973 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:50:44.785977 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:50:44.785981 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:50:44.785984 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:50:44.785988 | orchestrator | ok: [testbed-manager] 2026-02-27 00:50:44.785992 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:50:44.785995 | orchestrator | 2026-02-27 00:50:44.785999 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-27 00:50:44.786003 | orchestrator | Friday 27 February 2026 00:50:24 +0000 (0:00:03.042) 0:01:37.112 ******* 2026-02-27 00:50:44.786007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-27 00:50:44.786055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 00:50:44.786062 | orchestrator | 2026-02-27 00:50:44.786066 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-27 00:50:44.786070 | orchestrator | Friday 27 February 2026 00:50:26 +0000 (0:00:01.884) 0:01:38.997 ******* 2026-02-27 00:50:44.786075 | orchestrator | changed: [testbed-manager] 2026-02-27 00:50:44.786081 | orchestrator | 2026-02-27 00:50:44.786087 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-27 00:50:44.786094 | orchestrator | Friday 27 February 2026 
00:50:31 +0000 (0:00:05.066) 0:01:44.064 ******* 2026-02-27 00:50:44.786099 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:50:44.786105 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:50:44.786111 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:50:44.786118 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:50:44.786124 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:50:44.786131 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:50:44.786137 | orchestrator | changed: [testbed-manager] 2026-02-27 00:50:44.786144 | orchestrator | 2026-02-27 00:50:44.786147 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:50:44.786151 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:50:44.786157 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:50:44.786161 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:50:44.786171 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:50:44.786182 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:50:44.786188 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:50:44.786194 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:50:44.786200 | orchestrator | 2026-02-27 00:50:44.786206 | orchestrator | 2026-02-27 00:50:44.786212 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:50:44.786218 | orchestrator | Friday 27 February 2026 00:50:43 +0000 (0:00:12.383) 0:01:56.447 ******* 2026-02-27 00:50:44.786224 | 
orchestrator | =============================================================================== 2026-02-27 00:50:44.786235 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 46.12s 2026-02-27 00:50:44.786239 | orchestrator | osism.services.netdata : Add repository -------------------------------- 19.36s 2026-02-27 00:50:44.786243 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 12.38s 2026-02-27 00:50:44.786247 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.76s 2026-02-27 00:50:44.786252 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 5.07s 2026-02-27 00:50:44.786258 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.78s 2026-02-27 00:50:44.786264 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.49s 2026-02-27 00:50:44.786270 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.31s 2026-02-27 00:50:44.786276 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.27s 2026-02-27 00:50:44.786282 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.04s 2026-02-27 00:50:44.786310 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.21s 2026-02-27 00:50:44.786318 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.89s 2026-02-27 00:50:44.786322 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.80s 2026-02-27 00:50:44.786325 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.44s 2026-02-27 00:50:44.786329 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.42s 2026-02-27 
00:50:44.786333 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.28s
2026-02-27 00:50:44.789307 | orchestrator | 2026-02-27 00:50:44 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:44.789345 | orchestrator | 2026-02-27 00:50:44 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:47.849997 | orchestrator | 2026-02-27 00:50:47 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:47.852473 | orchestrator | 2026-02-27 00:50:47 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:47.853562 | orchestrator | 2026-02-27 00:50:47 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:47.853596 | orchestrator | 2026-02-27 00:50:47 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:50.938554 | orchestrator | 2026-02-27 00:50:50 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:50.941131 | orchestrator | 2026-02-27 00:50:50 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:50.942585 | orchestrator | 2026-02-27 00:50:50 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:50.943057 | orchestrator | 2026-02-27 00:50:50 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:53.994421 | orchestrator | 2026-02-27 00:50:53 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:53.996676 | orchestrator | 2026-02-27 00:50:53 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:53.997803 | orchestrator | 2026-02-27 00:50:53 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:53.998111 | orchestrator | 2026-02-27 00:50:54 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:50:57.052172 | orchestrator | 2026-02-27 00:50:57 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:50:57.053412 | orchestrator | 2026-02-27 00:50:57 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:50:57.054156 | orchestrator | 2026-02-27 00:50:57 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:50:57.054218 | orchestrator | 2026-02-27 00:50:57 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:00.190549 | orchestrator | 2026-02-27 00:51:00 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:51:00.190614 | orchestrator | 2026-02-27 00:51:00 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:00.190621 | orchestrator | 2026-02-27 00:51:00 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:00.190625 | orchestrator | 2026-02-27 00:51:00 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:03.161877 | orchestrator | 2026-02-27 00:51:03 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state STARTED
2026-02-27 00:51:03.163692 | orchestrator | 2026-02-27 00:51:03 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:03.165909 | orchestrator | 2026-02-27 00:51:03 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:03.166131 | orchestrator | 2026-02-27 00:51:03 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:06.216607 | orchestrator | 2026-02-27 00:51:06 | INFO  | Task b5939a3c-5710-4651-8aab-d38b1d28b70d is in state SUCCESS
2026-02-27 00:51:06.218491 | orchestrator |
2026-02-27 00:51:06.218631 | orchestrator |
2026-02-27 00:51:06.218644 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-27 00:51:06.218652 | orchestrator |
2026-02-27 00:51:06.218659 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-27 00:51:06.218666 | orchestrator | Friday 27 February 2026 00:48:34 +0000 (0:00:00.438) 0:00:00.438 *******
2026-02-27 00:51:06.218674 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 00:51:06.218682 | orchestrator |
2026-02-27 00:51:06.218688 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-27 00:51:06.218695 | orchestrator | Friday 27 February 2026 00:48:36 +0000 (0:00:01.846) 0:00:02.284 *******
2026-02-27 00:51:06.218701 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-27 00:51:06.218707 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-27 00:51:06.218713 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-27 00:51:06.218719 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-27 00:51:06.218726 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-27 00:51:06.218732 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-27 00:51:06.218739 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-27 00:51:06.218745 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-27 00:51:06.218751 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-27 00:51:06.218757 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-27 00:51:06.218763 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-27 00:51:06.218769 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-27 00:51:06.218776 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-27 00:51:06.218782 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-27 00:51:06.218804 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-27 00:51:06.218811 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-27 00:51:06.218817 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-27 00:51:06.218823 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-27 00:51:06.218829 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-27 00:51:06.218835 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-27 00:51:06.218841 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-27 00:51:06.218848 | orchestrator |
2026-02-27 00:51:06.218854 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-27 00:51:06.218860 | orchestrator | Friday 27 February 2026 00:48:41 +0000 (0:00:05.340) 0:00:07.624 *******
2026-02-27 00:51:06.218866 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
2026-02-27 00:51:06.218873 | orchestrator |
2026-02-27 00:51:06.218880 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-27 00:51:06.218886 | orchestrator | Friday 27 February 2026 00:48:43 +0000 (0:00:01.710) 0:00:09.335
******* 2026-02-27 00:51:06.218901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.218910 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.218929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.218936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.218943 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.218955 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.218962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 
00:51:06.218969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.218980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.218992 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.218999 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219005 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219022 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219118 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219177 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.219185 | orchestrator | 2026-02-27 00:51:06.219192 | 
orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2026-02-27 00:51:06.219200 | orchestrator | Friday 27 February 2026 00:48:49 +0000 (0:00:05.485) 0:00:14.821 *******
2026-02-27 00:51:06.219208 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219227 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219314 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:51:06.219326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219369 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:51:06.219380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219449 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:51:06.219473 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:51:06.219485 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:51:06.219495 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:51:06.219513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219532 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219554 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:51:06.219562 | orchestrator |
2026-02-27 00:51:06.219568 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2026-02-27 00:51:06.219575 | orchestrator | Friday 27 February 2026 00:48:51 +0000 (0:00:02.225) 0:00:17.046 *******
2026-02-27 00:51:06.219581 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219588 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219594 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219633 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:51:06.219639 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:51:06.219646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219665 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:51:06.219671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.219682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.219688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220020 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220053 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:51:06.220060 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:51:06.220068 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220090 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:51:06.220098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220132 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:51:06.220139 | orchestrator |
2026-02-27 00:51:06.220146 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-27 00:51:06.220154 | orchestrator | Friday 27 February 2026 00:48:54 +0000 (0:00:03.528) 0:00:20.574 *******
2026-02-27 00:51:06.220161 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:51:06.220168 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:51:06.220176 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:51:06.220183 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:51:06.220190 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:51:06.220197 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:51:06.220204 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:51:06.220211 | orchestrator |
2026-02-27 00:51:06.220219 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-27 00:51:06.220226 | orchestrator | Friday 27 February 2026 00:48:56 +0000 (0:00:01.531) 0:00:22.106 *******
2026-02-27 00:51:06.220233 | orchestrator | skipping: [testbed-manager]
2026-02-27 00:51:06.220240 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:51:06.220248 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:51:06.220255 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:51:06.220268 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:51:06.220334 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:51:06.220341 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:51:06.220348 | orchestrator |
2026-02-27 00:51:06.220356 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-27 00:51:06.220363 | orchestrator | Friday 27 February 2026 00:48:58 +0000 (0:00:01.827) 0:00:23.934 *******
2026-02-27 00:51:06.220370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220395 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220403 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220447 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-27 00:51:06.220467 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220486 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220516 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220529 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220564 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.220580 | orchestrator |
2026-02-27 00:51:06.220587 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-27 00:51:06.220594 | orchestrator | Friday 27 February 2026 00:49:09 +0000 (0:00:11.247) 0:00:35.181 *******
2026-02-27 00:51:06.220601 | orchestrator | [WARNING]: Skipped
2026-02-27 00:51:06.220609 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-27 00:51:06.220617 | orchestrator | to this access issue:
2026-02-27 00:51:06.220625 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-27
00:51:06.220633 | orchestrator | directory
2026-02-27 00:51:06.220642 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-27 00:51:06.220651 | orchestrator |
2026-02-27 00:51:06.220659 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-27 00:51:06.220667 | orchestrator | Friday 27 February 2026 00:49:12 +0000 (0:00:02.770) 0:00:37.952 *******
2026-02-27 00:51:06.220676 | orchestrator | [WARNING]: Skipped
2026-02-27 00:51:06.220685 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-27 00:51:06.220693 | orchestrator | to this access issue:
2026-02-27 00:51:06.220701 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-27 00:51:06.220715 | orchestrator | directory
2026-02-27 00:51:06.220724 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-27 00:51:06.220732 | orchestrator |
2026-02-27 00:51:06.220741 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-27 00:51:06.220748 | orchestrator | Friday 27 February 2026 00:49:13 +0000 (0:00:01.138) 0:00:39.090 *******
2026-02-27 00:51:06.220755 | orchestrator | [WARNING]: Skipped
2026-02-27 00:51:06.220762 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-27 00:51:06.220769 | orchestrator | to this access issue:
2026-02-27 00:51:06.220776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-27 00:51:06.220783 | orchestrator | directory
2026-02-27 00:51:06.220790 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-27 00:51:06.220798 | orchestrator |
2026-02-27 00:51:06.220805 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-27 00:51:06.220812 | orchestrator | Friday 27 February 2026 00:49:14 +0000 (0:00:01.192) 0:00:40.283 *******
2026-02-27 00:51:06.220819 | orchestrator | [WARNING]: Skipped
2026-02-27 00:51:06.220826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-27 00:51:06.220834 | orchestrator | to this access issue:
2026-02-27 00:51:06.220841 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-27 00:51:06.220848 | orchestrator | directory
2026-02-27 00:51:06.220855 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-27 00:51:06.220888 | orchestrator |
2026-02-27 00:51:06.220896 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-27 00:51:06.220903 | orchestrator | Friday 27 February 2026 00:49:15 +0000 (0:00:01.212) 0:00:41.495 *******
2026-02-27 00:51:06.220911 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:06.220918 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:06.220925 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:51:06.220932 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:06.220939 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:51:06.220946 | orchestrator | changed: [testbed-manager]
2026-02-27 00:51:06.220953 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:51:06.220960 | orchestrator |
2026-02-27 00:51:06.220968 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-27 00:51:06.220975 | orchestrator | Friday 27 February 2026 00:49:22 +0000 (0:00:06.692) 0:00:48.187 *******
2026-02-27 00:51:06.220986 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-27 00:51:06.220994 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-27 00:51:06.221001 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-27 00:51:06.221008 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-27 00:51:06.221015 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-27 00:51:06.221022 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-27 00:51:06.221029 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-27 00:51:06.221036 | orchestrator | 2026-02-27 00:51:06.221043 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-27 00:51:06.221051 | orchestrator | Friday 27 February 2026 00:49:28 +0000 (0:00:05.966) 0:00:54.154 ******* 2026-02-27 00:51:06.221058 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:51:06.221066 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:51:06.221073 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:51:06.221080 | orchestrator | changed: [testbed-manager] 2026-02-27 00:51:06.221096 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:51:06.221104 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:51:06.221111 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:51:06.221118 | orchestrator | 2026-02-27 00:51:06.221125 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-27 00:51:06.221132 | orchestrator | Friday 27 February 2026 00:49:33 +0000 (0:00:04.926) 0:00:59.081 ******* 2026-02-27 00:51:06.221140 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:51:06.221155 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:51:06.221172 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221185 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:51:06.221209 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221217 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221224 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 
00:51:06.221240 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:51:06.221259 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221324 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:51:06.221342 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221350 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221358 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:51:06.221365 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221376 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221384 | orchestrator | 2026-02-27 00:51:06.221392 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-27 00:51:06.221404 | orchestrator | Friday 27 February 2026 00:49:37 +0000 (0:00:04.122) 0:01:03.203 ******* 2026-02-27 00:51:06.221412 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-27 00:51:06.221419 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-27 00:51:06.221426 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-27 00:51:06.221434 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-27 00:51:06.221441 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-27 00:51:06.221447 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-27 00:51:06.221455 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-27 00:51:06.221462 | orchestrator |
2026-02-27 00:51:06.221474 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-27 00:51:06.221481 | orchestrator | Friday 27 February 2026 00:49:41 +0000 (0:00:04.229) 0:01:07.433 *******
2026-02-27 00:51:06.221488 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-27 00:51:06.221502 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-27 00:51:06.221514 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-27 00:51:06.221525 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-27 00:51:06.221537 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-27 00:51:06.221550 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-27 00:51:06.221561 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-27 00:51:06.221574 | orchestrator |
2026-02-27 00:51:06.221584 |
orchestrator | TASK [common : Check common containers] **************************************** 2026-02-27 00:51:06.221592 | orchestrator | Friday 27 February 2026 00:49:44 +0000 (0:00:03.072) 0:01:10.506 ******* 2026-02-27 00:51:06.221600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221608 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221628 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-27 00:51:06.221711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221719 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:51:06.221731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221739 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221766 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221778 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221787 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:51:06.221805 | orchestrator |
2026-02-27 00:51:06.221814 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-27 00:51:06.221822 | orchestrator | Friday 27 February 2026 00:49:48 +0000 (0:00:04.174) 0:01:14.680 *******
2026-02-27 00:51:06.221836 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:06.221845 | orchestrator | changed: [testbed-manager]
2026-02-27 00:51:06.221853 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:06.221862 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:06.221870 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:51:06.221879 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:51:06.221887 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:51:06.221896 | orchestrator |
2026-02-27 00:51:06.221905 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-27 00:51:06.221914 | orchestrator | Friday 27 February 2026 00:49:51 +0000 (0:00:02.174) 0:01:16.855 *******
2026-02-27 00:51:06.221923 | orchestrator | changed: [testbed-manager]
2026-02-27 00:51:06.221931 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:06.221940 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:06.221948 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:06.221957 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:51:06.221966 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:51:06.221974 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:51:06.221982 | orchestrator |
2026-02-27 00:51:06.221991 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-27 00:51:06.221999 | orchestrator | Friday 27 February 2026 00:49:52 +0000 (0:00:01.468)
0:01:18.323 *******
2026-02-27 00:51:06.222008 | orchestrator |
2026-02-27 00:51:06.222062 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-27 00:51:06.222074 | orchestrator | Friday 27 February 2026 00:49:52 +0000 (0:00:00.073) 0:01:18.397 *******
2026-02-27 00:51:06.222083 | orchestrator |
2026-02-27 00:51:06.222091 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-27 00:51:06.222100 | orchestrator | Friday 27 February 2026 00:49:52 +0000 (0:00:00.065) 0:01:18.462 *******
2026-02-27 00:51:06.222114 | orchestrator |
2026-02-27 00:51:06.222123 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-27 00:51:06.222132 | orchestrator | Friday 27 February 2026 00:49:52 +0000 (0:00:00.064) 0:01:18.527 *******
2026-02-27 00:51:06.222140 | orchestrator |
2026-02-27 00:51:06.222149 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-27 00:51:06.222157 | orchestrator | Friday 27 February 2026 00:49:53 +0000 (0:00:00.242) 0:01:18.769 *******
2026-02-27 00:51:06.222166 | orchestrator |
2026-02-27 00:51:06.222174 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-27 00:51:06.222183 | orchestrator | Friday 27 February 2026 00:49:53 +0000 (0:00:00.066) 0:01:18.836 *******
2026-02-27 00:51:06.222192 | orchestrator |
2026-02-27 00:51:06.222200 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-27 00:51:06.222209 | orchestrator | Friday 27 February 2026 00:49:53 +0000 (0:00:00.066) 0:01:18.902 *******
2026-02-27 00:51:06.222217 | orchestrator |
2026-02-27 00:51:06.222226 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-27 00:51:06.222235 | orchestrator | Friday 27 February 2026 00:49:53 +0000 (0:00:00.087) 0:01:18.989 *******
2026-02-27 00:51:06.222243 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:06.222252 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:06.222261 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:51:06.222283 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:06.222292 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:51:06.222301 | orchestrator | changed: [testbed-manager]
2026-02-27 00:51:06.222310 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:51:06.222318 | orchestrator |
2026-02-27 00:51:06.222327 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-27 00:51:06.222336 | orchestrator | Friday 27 February 2026 00:50:24 +0000 (0:00:30.932) 0:01:49.921 *******
2026-02-27 00:51:06.222344 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:06.222353 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:51:06.222361 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:51:06.222370 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:06.222378 | orchestrator | changed: [testbed-manager]
2026-02-27 00:51:06.222387 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:06.222395 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:51:06.222404 | orchestrator |
2026-02-27 00:51:06.222412 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-27 00:51:06.222421 | orchestrator | Friday 27 February 2026 00:50:56 +0000 (0:00:32.722) 0:02:22.644 *******
2026-02-27 00:51:06.222429 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:51:06.222438 | orchestrator | ok: [testbed-manager]
2026-02-27 00:51:06.222447 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:51:06.222455 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:51:06.222464 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:51:06.222472 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:51:06.222486 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:51:06.222495 | orchestrator |
2026-02-27 00:51:06.222504 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-27 00:51:06.222513 | orchestrator | Friday 27 February 2026 00:50:59 +0000 (0:00:02.250) 0:02:24.895 *******
2026-02-27 00:51:06.222522 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:06.222530 | orchestrator | changed: [testbed-manager]
2026-02-27 00:51:06.222539 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:06.222548 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:06.222556 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:51:06.222564 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:51:06.222573 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:51:06.222582 | orchestrator |
2026-02-27 00:51:06.222590 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:51:06.222600 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-27 00:51:06.222616 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-27 00:51:06.222625 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-27 00:51:06.222664 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-27 00:51:06.222675 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-27 00:51:06.222684 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-27 00:51:06.222693 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-27 00:51:06.222701 | orchestrator |
2026-02-27 00:51:06.222710 | orchestrator |
2026-02-27 00:51:06.222720 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:51:06.222728 | orchestrator | Friday 27 February 2026 00:51:04 +0000 (0:00:05.520) 0:02:30.416 *******
2026-02-27 00:51:06.222737 | orchestrator | ===============================================================================
2026-02-27 00:51:06.222745 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.72s
2026-02-27 00:51:06.222754 | orchestrator | common : Restart fluentd container ------------------------------------- 30.93s
2026-02-27 00:51:06.222763 | orchestrator | common : Copying over config.json files for services ------------------- 11.25s
2026-02-27 00:51:06.222772 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.69s
2026-02-27 00:51:06.222781 | orchestrator | common : Copying over cron logrotate config file ------------------------ 5.97s
2026-02-27 00:51:06.222790 | orchestrator | common : Restart cron container ----------------------------------------- 5.52s
2026-02-27 00:51:06.222798 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.49s
2026-02-27 00:51:06.222807 | orchestrator | common : Ensuring config directories exist ------------------------------ 5.34s
2026-02-27 00:51:06.222816 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.93s
2026-02-27 00:51:06.222824 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.23s
2026-02-27 00:51:06.222833 | orchestrator | common : Check common containers ---------------------------------------- 4.17s
2026-02-27 00:51:06.222841 | orchestrator | common : Ensuring config directories have correct owner and permission --- 4.12s
2026-02-27 00:51:06.222850 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key
------ 3.53s
2026-02-27 00:51:06.222858 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.07s
2026-02-27 00:51:06.222867 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.77s
2026-02-27 00:51:06.222875 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.25s
2026-02-27 00:51:06.222884 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.23s
2026-02-27 00:51:06.222892 | orchestrator | common : Creating log volume -------------------------------------------- 2.17s
2026-02-27 00:51:06.222901 | orchestrator | common : include_tasks -------------------------------------------------- 1.85s
2026-02-27 00:51:06.222909 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.83s
2026-02-27 00:51:06.222918 | orchestrator | 2026-02-27 00:51:06 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:06.223536 | orchestrator | 2026-02-27 00:51:06 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:06.224081 | orchestrator | 2026-02-27 00:51:06 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:09.269649 | orchestrator | 2026-02-27 00:51:09 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:09.269750 | orchestrator | 2026-02-27 00:51:09 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:09.269787 | orchestrator | 2026-02-27 00:51:09 | INFO  | Task abf4550c-36c4-4d4e-b889-f5f669e03e36 is in state STARTED
2026-02-27 00:51:09.269800 | orchestrator | 2026-02-27 00:51:09 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:09.272261 | orchestrator | 2026-02-27 00:51:09 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:09.275083 | orchestrator |
2026-02-27 00:51:09 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:09.275137 | orchestrator | 2026-02-27 00:51:09 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:12.323624 | orchestrator | 2026-02-27 00:51:12 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:12.324401 | orchestrator | 2026-02-27 00:51:12 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:12.326799 | orchestrator | 2026-02-27 00:51:12 | INFO  | Task abf4550c-36c4-4d4e-b889-f5f669e03e36 is in state STARTED
2026-02-27 00:51:12.327681 | orchestrator | 2026-02-27 00:51:12 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:12.328692 | orchestrator | 2026-02-27 00:51:12 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:12.332778 | orchestrator | 2026-02-27 00:51:12 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:12.332848 | orchestrator | 2026-02-27 00:51:12 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:15.366778 | orchestrator | 2026-02-27 00:51:15 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:15.369481 | orchestrator | 2026-02-27 00:51:15 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:15.370330 | orchestrator | 2026-02-27 00:51:15 | INFO  | Task abf4550c-36c4-4d4e-b889-f5f669e03e36 is in state STARTED
2026-02-27 00:51:15.371131 | orchestrator | 2026-02-27 00:51:15 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:15.372050 | orchestrator | 2026-02-27 00:51:15 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:15.376158 | orchestrator | 2026-02-27 00:51:15 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:15.376222 | orchestrator | 2026-02-27 00:51:15 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:18.412646 | orchestrator | 2026-02-27 00:51:18 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:18.414254 | orchestrator | 2026-02-27 00:51:18 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:18.415659 | orchestrator | 2026-02-27 00:51:18 | INFO  | Task abf4550c-36c4-4d4e-b889-f5f669e03e36 is in state STARTED
2026-02-27 00:51:18.417488 | orchestrator | 2026-02-27 00:51:18 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:18.419834 | orchestrator | 2026-02-27 00:51:18 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:18.421888 | orchestrator | 2026-02-27 00:51:18 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:18.421924 | orchestrator | 2026-02-27 00:51:18 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:21.471697 | orchestrator | 2026-02-27 00:51:21 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:21.472919 | orchestrator | 2026-02-27 00:51:21 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:21.476445 | orchestrator | 2026-02-27 00:51:21 | INFO  | Task abf4550c-36c4-4d4e-b889-f5f669e03e36 is in state STARTED
2026-02-27 00:51:21.478858 | orchestrator | 2026-02-27 00:51:21 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:21.480710 | orchestrator | 2026-02-27 00:51:21 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:21.482649 | orchestrator | 2026-02-27 00:51:21 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:21.482717 | orchestrator | 2026-02-27 00:51:21 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:24.531772 | orchestrator | 2026-02-27 00:51:24 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:24.533145 | orchestrator | 2026-02-27 00:51:24 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:24.534116 | orchestrator | 2026-02-27 00:51:24 | INFO  | Task abf4550c-36c4-4d4e-b889-f5f669e03e36 is in state STARTED
2026-02-27 00:51:24.534596 | orchestrator | 2026-02-27 00:51:24 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:24.536981 | orchestrator | 2026-02-27 00:51:24 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:24.537644 | orchestrator | 2026-02-27 00:51:24 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:24.537680 | orchestrator | 2026-02-27 00:51:24 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:27.669307 | orchestrator | 2026-02-27 00:51:27 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:27.669655 | orchestrator | 2026-02-27 00:51:27 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:27.670186 | orchestrator | 2026-02-27 00:51:27 | INFO  | Task abf4550c-36c4-4d4e-b889-f5f669e03e36 is in state SUCCESS
2026-02-27 00:51:27.670967 | orchestrator | 2026-02-27 00:51:27 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:27.671864 | orchestrator | 2026-02-27 00:51:27 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED
2026-02-27 00:51:27.672667 | orchestrator | 2026-02-27 00:51:27 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:27.673693 | orchestrator | 2026-02-27 00:51:27 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:27.673739 | orchestrator | 2026-02-27 00:51:27 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:30.726647 | orchestrator | 2026-02-27 00:51:30 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:30.726829 | orchestrator | 2026-02-27 00:51:30 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:30.727697 | orchestrator | 2026-02-27 00:51:30 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:30.728553 | orchestrator | 2026-02-27 00:51:30 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED
2026-02-27 00:51:30.729429 | orchestrator | 2026-02-27 00:51:30 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:30.730315 | orchestrator | 2026-02-27 00:51:30 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:30.730393 | orchestrator | 2026-02-27 00:51:30 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:33.797548 | orchestrator | 2026-02-27 00:51:33 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:33.798635 | orchestrator | 2026-02-27 00:51:33 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:33.803119 | orchestrator | 2026-02-27 00:51:33 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:33.804483 | orchestrator | 2026-02-27 00:51:33 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED
2026-02-27 00:51:33.805382 | orchestrator | 2026-02-27 00:51:33 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:33.809979 | orchestrator | 2026-02-27 00:51:33 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:33.810134 | orchestrator | 2026-02-27 00:51:33 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:36.854529 | orchestrator | 2026-02-27 00:51:36 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:36.854635 | orchestrator | 2026-02-27 00:51:36 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:36.855639 | orchestrator | 2026-02-27 00:51:36 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:36.856578 | orchestrator | 2026-02-27 00:51:36 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED
2026-02-27 00:51:36.857734 | orchestrator | 2026-02-27 00:51:36 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:36.858550 | orchestrator | 2026-02-27 00:51:36 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:36.858642 | orchestrator | 2026-02-27 00:51:36 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:39.897394 | orchestrator | 2026-02-27 00:51:39 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:39.898146 | orchestrator | 2026-02-27 00:51:39 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:39.900747 | orchestrator | 2026-02-27 00:51:39 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:39.902605 | orchestrator | 2026-02-27 00:51:39 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED
2026-02-27 00:51:39.903681 | orchestrator | 2026-02-27 00:51:39 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state STARTED
2026-02-27 00:51:39.904754 | orchestrator | 2026-02-27 00:51:39 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:51:39.904886 | orchestrator | 2026-02-27 00:51:39 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:51:42.949681 | orchestrator | 2026-02-27 00:51:42 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:51:42.949783 | orchestrator | 2026-02-27 00:51:42 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED
2026-02-27 00:51:42.952903 | orchestrator | 2026-02-27 00:51:42 | INFO  | Task
929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:51:42.993651 | orchestrator |
2026-02-27 00:51:42.993730 | orchestrator |
2026-02-27 00:51:42.993741 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 00:51:42.993751 | orchestrator |
2026-02-27 00:51:42.993759 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 00:51:42.993771 | orchestrator | Friday 27 February 2026 00:51:13 +0000 (0:00:00.438) 0:00:00.438 *******
2026-02-27 00:51:42.993817 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:51:42.993835 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:51:42.993847 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:51:42.993860 | orchestrator |
2026-02-27 00:51:42.993872 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 00:51:42.993883 | orchestrator | Friday 27 February 2026 00:51:13 +0000 (0:00:00.447) 0:00:00.886 *******
2026-02-27 00:51:42.993896 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-02-27 00:51:42.993909 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-02-27 00:51:42.993922 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-02-27 00:51:42.993935 | orchestrator |
2026-02-27 00:51:42.993948 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-02-27 00:51:42.993961 | orchestrator |
2026-02-27 00:51:42.993974 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-02-27 00:51:42.993983 | orchestrator | Friday 27 February 2026 00:51:14 +0000 (0:00:00.626) 0:00:01.513 *******
2026-02-27 00:51:42.993991 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:51:42.994001 | orchestrator |
2026-02-27 00:51:42.994008 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-02-27 00:51:42.994066 | orchestrator | Friday 27 February 2026 00:51:15 +0000 (0:00:00.580) 0:00:02.093 *******
2026-02-27 00:51:42.994075 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-27 00:51:42.994084 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-27 00:51:42.994091 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-27 00:51:42.994099 | orchestrator |
2026-02-27 00:51:42.994108 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-02-27 00:51:42.994116 | orchestrator | Friday 27 February 2026 00:51:16 +0000 (0:00:01.066) 0:00:03.159 *******
2026-02-27 00:51:42.994124 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-02-27 00:51:42.994132 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-02-27 00:51:42.994140 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-02-27 00:51:42.994148 | orchestrator |
2026-02-27 00:51:42.994156 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-02-27 00:51:42.994164 | orchestrator | Friday 27 February 2026 00:51:18 +0000 (0:00:02.402) 0:00:05.561 *******
2026-02-27 00:51:42.994172 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:42.994181 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:42.994189 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:42.994197 | orchestrator |
2026-02-27 00:51:42.994205 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-02-27 00:51:42.994213 | orchestrator | Friday 27 February 2026 00:51:20 +0000 (0:00:02.065) 0:00:07.627 *******
2026-02-27 00:51:42.994221 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:51:42.994228 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:51:42.994236 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:51:42.994269 | orchestrator |
2026-02-27 00:51:42.994277 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:51:42.994285 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:51:42.994295 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:51:42.994303 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:51:42.994311 | orchestrator |
2026-02-27 00:51:42.994319 | orchestrator |
2026-02-27 00:51:42.994327 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:51:42.994335 | orchestrator | Friday 27 February 2026 00:51:24 +0000 (0:00:03.910) 0:00:11.537 *******
2026-02-27 00:51:42.994353 | orchestrator | ===============================================================================
2026-02-27 00:51:42.994373 | orchestrator | memcached : Restart memcached container --------------------------------- 3.91s
2026-02-27 00:51:42.994382 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.40s
2026-02-27 00:51:42.994390 | orchestrator | memcached : Check memcached container ----------------------------------- 2.07s
2026-02-27 00:51:42.994397 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.07s
2026-02-27 00:51:42.994405 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-02-27 00:51:42.994413 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.58s
2026-02-27 00:51:42.994421 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.45s
2026-02-27 00:51:42.994429 | orchestrator |
2026-02-27 00:51:42.994436 | orchestrator |
2026-02-27 00:51:42.994444 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 00:51:42.994452 | orchestrator |
2026-02-27 00:51:42.994460 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 00:51:42.994468 | orchestrator | Friday 27 February 2026 00:51:13 +0000 (0:00:00.416) 0:00:00.416 *******
2026-02-27 00:51:42.994475 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:51:42.994483 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:51:42.994491 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:51:42.994499 | orchestrator |
2026-02-27 00:51:42.994507 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 00:51:42.994531 | orchestrator | Friday 27 February 2026 00:51:13 +0000 (0:00:00.355) 0:00:00.772 *******
2026-02-27 00:51:42.994539 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-27 00:51:42.994547 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-27 00:51:42.994555 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-27 00:51:42.994563 | orchestrator |
2026-02-27 00:51:42.994571 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-27 00:51:42.994578 | orchestrator |
2026-02-27 00:51:42.994586 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-27 00:51:42.994594 | orchestrator | Friday 27 February 2026 00:51:14 +0000 (0:00:00.765) 0:00:01.537 *******
2026-02-27 00:51:42.994602 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:51:42.994610 | orchestrator |
2026-02-27 00:51:42.994617 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-27 00:51:42.994625 |
orchestrator | Friday 27 February 2026 00:51:14 +0000 (0:00:00.732) 0:00:02.270 *******
2026-02-27 00:51:42.994636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994740 | orchestrator |
2026-02-27 00:51:42.994753 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-27 00:51:42.994765 | orchestrator | Friday 27 February 2026 00:51:16 +0000 (0:00:01.622) 0:00:03.892 *******
2026-02-27 00:51:42.994777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-27 00:51:42.994877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled':
True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.994891 | orchestrator | 2026-02-27 00:51:42.994905 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-27 00:51:42.994917 | orchestrator | Friday 27 February 2026 00:51:19 +0000 (0:00:03.083) 0:00:06.976 ******* 2026-02-27 00:51:42.994931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.994945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.994966 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.994975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.994984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 
'timeout': '30'}}}) 2026-02-27 00:51:42.994999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.995007 | orchestrator | 2026-02-27 00:51:42.995015 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-27 00:51:42.995029 | orchestrator | Friday 27 February 2026 00:51:22 +0000 (0:00:02.993) 0:00:09.970 ******* 2026-02-27 00:51:42.995038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.995046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.995067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.995075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.995087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 
'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.995100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-27 00:51:42.995108 | orchestrator | 2026-02-27 00:51:42.995116 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-27 00:51:42.995125 | orchestrator | Friday 27 February 2026 00:51:24 +0000 (0:00:02.248) 0:00:12.218 ******* 2026-02-27 00:51:42.995133 | orchestrator | 2026-02-27 00:51:42.995140 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-27 00:51:42.995148 | orchestrator | Friday 27 February 2026 00:51:25 +0000 (0:00:00.178) 0:00:12.397 ******* 2026-02-27 00:51:42.995156 | orchestrator | 2026-02-27 00:51:42.995164 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-27 00:51:42.995172 | orchestrator | Friday 27 February 2026 00:51:25 +0000 (0:00:00.119) 0:00:12.517 ******* 2026-02-27 00:51:42.995179 | orchestrator | 
2026-02-27 00:51:42.995187 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-27 00:51:42.995200 | orchestrator | Friday 27 February 2026 00:51:25 +0000 (0:00:00.241) 0:00:12.758 ******* 2026-02-27 00:51:42.995208 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:51:42.995216 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:51:42.995223 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:51:42.995231 | orchestrator | 2026-02-27 00:51:42.995257 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-27 00:51:42.995266 | orchestrator | Friday 27 February 2026 00:51:30 +0000 (0:00:04.717) 0:00:17.475 ******* 2026-02-27 00:51:42.995273 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:51:42.995281 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:51:42.995289 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:51:42.995297 | orchestrator | 2026-02-27 00:51:42.995305 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:51:42.995313 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:51:42.995321 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:51:42.995329 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:51:42.995337 | orchestrator | 2026-02-27 00:51:42.995345 | orchestrator | 2026-02-27 00:51:42.995352 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:51:42.995360 | orchestrator | Friday 27 February 2026 00:51:39 +0000 (0:00:09.578) 0:00:27.054 ******* 2026-02-27 00:51:42.995368 | orchestrator | =============================================================================== 2026-02-27 00:51:42.995376 | 
orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.58s 2026-02-27 00:51:42.995384 | orchestrator | redis : Restart redis container ----------------------------------------- 4.72s 2026-02-27 00:51:42.995391 | orchestrator | redis : Copying over default config.json files -------------------------- 3.08s 2026-02-27 00:51:42.995399 | orchestrator | redis : Copying over redis config files --------------------------------- 2.99s 2026-02-27 00:51:42.995407 | orchestrator | redis : Check redis containers ------------------------------------------ 2.25s 2026-02-27 00:51:42.995415 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.63s 2026-02-27 00:51:42.995422 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2026-02-27 00:51:42.995430 | orchestrator | redis : include_tasks --------------------------------------------------- 0.73s 2026-02-27 00:51:42.995438 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.54s 2026-02-27 00:51:42.995446 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-02-27 00:51:42.995454 | orchestrator | 2026-02-27 00:51:42 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:51:42.995462 | orchestrator | 2026-02-27 00:51:42 | INFO  | Task 512fc33d-5135-4d3a-a0fd-f1f371cd7395 is in state SUCCESS 2026-02-27 00:51:42.995474 | orchestrator | 2026-02-27 00:51:42 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:51:42.995482 | orchestrator | 2026-02-27 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:51:46.090723 | orchestrator | 2026-02-27 00:51:46 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:51:46.091391 | orchestrator | 2026-02-27 00:51:46 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state 
STARTED 2026-02-27 00:51:46.093847 | orchestrator | 2026-02-27 00:51:46 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:51:46.095861 | orchestrator | 2026-02-27 00:51:46 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:51:46.097489 | orchestrator | 2026-02-27 00:51:46 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:51:46.097523 | orchestrator | 2026-02-27 00:51:46 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:51:49.166781 | orchestrator | 2026-02-27 00:51:49 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:51:49.169137 | orchestrator | 2026-02-27 00:51:49 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:51:49.171322 | orchestrator | 2026-02-27 00:51:49 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:51:49.174620 | orchestrator | 2026-02-27 00:51:49 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:51:49.175716 | orchestrator | 2026-02-27 00:51:49 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:51:49.176012 | orchestrator | 2026-02-27 00:51:49 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:51:52.248080 | orchestrator | 2026-02-27 00:51:52 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:51:52.248931 | orchestrator | 2026-02-27 00:51:52 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:51:52.252190 | orchestrator | 2026-02-27 00:51:52 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:51:52.253342 | orchestrator | 2026-02-27 00:51:52 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:51:52.255535 | orchestrator | 2026-02-27 00:51:52 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 
2026-02-27 00:51:52.255947 | orchestrator | 2026-02-27 00:51:52 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:51:55.525634 | orchestrator | 2026-02-27 00:51:55 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:51:55.525763 | orchestrator | 2026-02-27 00:51:55 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:51:55.525780 | orchestrator | 2026-02-27 00:51:55 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:51:55.525792 | orchestrator | 2026-02-27 00:51:55 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:51:55.525803 | orchestrator | 2026-02-27 00:51:55 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:51:55.525814 | orchestrator | 2026-02-27 00:51:55 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:51:58.732557 | orchestrator | 2026-02-27 00:51:58 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:51:58.732649 | orchestrator | 2026-02-27 00:51:58 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:51:58.732661 | orchestrator | 2026-02-27 00:51:58 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:51:58.732670 | orchestrator | 2026-02-27 00:51:58 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:51:58.732679 | orchestrator | 2026-02-27 00:51:58 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:51:58.732687 | orchestrator | 2026-02-27 00:51:58 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:01.815805 | orchestrator | 2026-02-27 00:52:01 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:01.817620 | orchestrator | 2026-02-27 00:52:01 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:01.819376 | 
orchestrator | 2026-02-27 00:52:01 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:01.821344 | orchestrator | 2026-02-27 00:52:01 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:01.823964 | orchestrator | 2026-02-27 00:52:01 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:01.824276 | orchestrator | 2026-02-27 00:52:01 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:04.865561 | orchestrator | 2026-02-27 00:52:04 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:04.865650 | orchestrator | 2026-02-27 00:52:04 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:04.865665 | orchestrator | 2026-02-27 00:52:04 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:04.865676 | orchestrator | 2026-02-27 00:52:04 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:04.865687 | orchestrator | 2026-02-27 00:52:04 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:04.865699 | orchestrator | 2026-02-27 00:52:04 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:07.883640 | orchestrator | 2026-02-27 00:52:07 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:07.884298 | orchestrator | 2026-02-27 00:52:07 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:07.884706 | orchestrator | 2026-02-27 00:52:07 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:07.885332 | orchestrator | 2026-02-27 00:52:07 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:07.886294 | orchestrator | 2026-02-27 00:52:07 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:07.886501 | 
orchestrator | 2026-02-27 00:52:07 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:10.938299 | orchestrator | 2026-02-27 00:52:10 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:10.938688 | orchestrator | 2026-02-27 00:52:10 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:10.941035 | orchestrator | 2026-02-27 00:52:10 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:10.941490 | orchestrator | 2026-02-27 00:52:10 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:10.942248 | orchestrator | 2026-02-27 00:52:10 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:10.942413 | orchestrator | 2026-02-27 00:52:10 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:14.013630 | orchestrator | 2026-02-27 00:52:13 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:14.013713 | orchestrator | 2026-02-27 00:52:13 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:14.013727 | orchestrator | 2026-02-27 00:52:13 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:14.013738 | orchestrator | 2026-02-27 00:52:13 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:14.013748 | orchestrator | 2026-02-27 00:52:13 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:14.013755 | orchestrator | 2026-02-27 00:52:13 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:17.171989 | orchestrator | 2026-02-27 00:52:17 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:17.174392 | orchestrator | 2026-02-27 00:52:17 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:17.181410 | orchestrator | 2026-02-27 
00:52:17 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:17.183572 | orchestrator | 2026-02-27 00:52:17 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:17.184584 | orchestrator | 2026-02-27 00:52:17 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:17.184698 | orchestrator | 2026-02-27 00:52:17 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:20.259107 | orchestrator | 2026-02-27 00:52:20 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:20.263923 | orchestrator | 2026-02-27 00:52:20 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:20.263999 | orchestrator | 2026-02-27 00:52:20 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:20.264041 | orchestrator | 2026-02-27 00:52:20 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:20.264053 | orchestrator | 2026-02-27 00:52:20 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:20.264065 | orchestrator | 2026-02-27 00:52:20 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:23.311576 | orchestrator | 2026-02-27 00:52:23 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:23.313113 | orchestrator | 2026-02-27 00:52:23 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:23.315336 | orchestrator | 2026-02-27 00:52:23 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:23.317175 | orchestrator | 2026-02-27 00:52:23 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:23.319889 | orchestrator | 2026-02-27 00:52:23 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:23.319925 | orchestrator | 2026-02-27 
00:52:23 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:26.383410 | orchestrator | 2026-02-27 00:52:26 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:26.383861 | orchestrator | 2026-02-27 00:52:26 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:26.387952 | orchestrator | 2026-02-27 00:52:26 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:26.388533 | orchestrator | 2026-02-27 00:52:26 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:26.389711 | orchestrator | 2026-02-27 00:52:26 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:26.389761 | orchestrator | 2026-02-27 00:52:26 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:29.424711 | orchestrator | 2026-02-27 00:52:29 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:29.425860 | orchestrator | 2026-02-27 00:52:29 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state STARTED 2026-02-27 00:52:29.427002 | orchestrator | 2026-02-27 00:52:29 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:52:29.428106 | orchestrator | 2026-02-27 00:52:29 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:52:29.429638 | orchestrator | 2026-02-27 00:52:29 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED 2026-02-27 00:52:29.429691 | orchestrator | 2026-02-27 00:52:29 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:52:32.482003 | orchestrator | 2026-02-27 00:52:32 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:52:32.483644 | orchestrator | 2026-02-27 00:52:32 | INFO  | Task aea4793b-952c-4d9d-be7f-59a791bec5cf is in state SUCCESS 2026-02-27 00:52:32.485069 | orchestrator | 2026-02-27 00:52:32.487119 | orchestrator 
| 2026-02-27 00:52:32.487165 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 00:52:32.487174 | orchestrator |
2026-02-27 00:52:32.487181 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 00:52:32.487188 | orchestrator | Friday 27 February 2026 00:51:13 +0000 (0:00:00.366) 0:00:00.366 *******
2026-02-27 00:52:32.487195 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:52:32.487274 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:52:32.487280 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:52:32.487287 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:52:32.487294 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:52:32.487301 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:52:32.487308 | orchestrator |
2026-02-27 00:52:32.487315 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 00:52:32.487321 | orchestrator | Friday 27 February 2026 00:51:14 +0000 (0:00:00.993) 0:00:01.359 *******
2026-02-27 00:52:32.487328 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-27 00:52:32.487335 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-27 00:52:32.487341 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-27 00:52:32.487348 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-27 00:52:32.487354 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-27 00:52:32.487361 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-27 00:52:32.487367 | orchestrator |
2026-02-27 00:52:32.487373 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-27 00:52:32.487380 | orchestrator |
2026-02-27 00:52:32.487386 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-27 00:52:32.487392 | orchestrator | Friday 27 February 2026 00:51:15 +0000 (0:00:00.963) 0:00:02.323 *******
2026-02-27 00:52:32.487406 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 00:52:32.487413 | orchestrator |
2026-02-27 00:52:32.487419 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-27 00:52:32.487425 | orchestrator | Friday 27 February 2026 00:51:17 +0000 (0:00:01.859) 0:00:04.182 *******
2026-02-27 00:52:32.487430 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-27 00:52:32.487436 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-27 00:52:32.487441 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-27 00:52:32.487447 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-27 00:52:32.487453 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-27 00:52:32.487459 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-27 00:52:32.487464 | orchestrator |
2026-02-27 00:52:32.487470 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-27 00:52:32.487476 | orchestrator | Friday 27 February 2026 00:51:19 +0000 (0:00:01.646) 0:00:05.829 *******
2026-02-27 00:52:32.487482 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-27 00:52:32.487488 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-27 00:52:32.487494 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-27 00:52:32.487500 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-27 00:52:32.487524 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-27 00:52:32.487531 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-27 00:52:32.487537 | orchestrator |
2026-02-27 00:52:32.487543 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-27 00:52:32.487550 | orchestrator | Friday 27 February 2026 00:51:20 +0000 (0:00:01.876) 0:00:07.706 *******
2026-02-27 00:52:32.487556 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-27 00:52:32.487562 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:52:32.487569 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-27 00:52:32.487575 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:52:32.487582 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-27 00:52:32.487587 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:52:32.487593 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-27 00:52:32.487599 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:52:32.487605 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-27 00:52:32.487611 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:52:32.487616 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-27 00:52:32.487622 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:52:32.487628 | orchestrator |
2026-02-27 00:52:32.487634 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-27 00:52:32.487640 | orchestrator | Friday 27 February 2026 00:51:22 +0000 (0:00:01.740) 0:00:09.447 *******
2026-02-27 00:52:32.487646 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:52:32.487651 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:52:32.487657 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:52:32.487662 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:52:32.487669 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:52:32.487675 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:52:32.487682 | orchestrator |
2026-02-27 00:52:32.487688 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2026-02-27 00:52:32.487695 | orchestrator | Friday 27 February 2026 00:51:23 +0000 (0:00:01.057) 0:00:10.504 *******
2026-02-27 00:52:32.487720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487809 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487834 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487841 | orchestrator |
2026-02-27 00:52:32.487848 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2026-02-27 00:52:32.487854 | orchestrator | Friday 27 February 2026 00:51:27 +0000 (0:00:03.631) 0:00:14.135 *******
2026-02-27 00:52:32.487861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.487916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.487972 | orchestrator |
2026-02-27 00:52:32.487979 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-27 00:52:32.487986 | orchestrator | Friday 27 February 2026 00:51:32 +0000 (0:00:02.404) 0:00:18.910 *******
2026-02-27 00:52:32.487993 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:52:32.487999 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:52:32.488006 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:52:32.488013 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:52:32.488019 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:52:32.488026 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:52:32.488033 | orchestrator |
2026-02-27 00:52:32.488039 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-02-27 00:52:32.488046 | orchestrator | Friday 27 February 2026 00:51:34 +0000 (0:00:02.404) 0:00:21.315 *******
2026-02-27 00:52:32.488054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.488060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.488067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.488079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.488086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.488105 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-27 00:52:32.488113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.488119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.488125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.488143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.488158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.488167 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-27 00:52:32.488174 | orchestrator |
2026-02-27 00:52:32.488181 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-27 00:52:32.488187 | orchestrator | Friday 27 February 2026 00:51:37 +0000 (0:00:03.198) 0:00:24.514 *******
2026-02-27 00:52:32.488193 | orchestrator |
2026-02-27 00:52:32.488240 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-27 00:52:32.488247 | orchestrator | Friday 27 February 2026 00:51:37 +0000 (0:00:00.183) 0:00:24.698 *******
2026-02-27 00:52:32.488254 | orchestrator |
2026-02-27 00:52:32.488260 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-27 00:52:32.488266 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:00.358) 0:00:25.056 *******
2026-02-27 00:52:32.488272 | orchestrator |
2026-02-27 00:52:32.488278 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-27 00:52:32.488284 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:00.347) 0:00:25.404 *******
2026-02-27 00:52:32.488291 | orchestrator |
2026-02-27 00:52:32.488297 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-27 00:52:32.488303 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:00.205) 0:00:25.609 *******
2026-02-27 00:52:32.488310 | orchestrator |
2026-02-27 00:52:32.488316 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-27 00:52:32.488322 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:00.201) 0:00:25.810 *******
2026-02-27 00:52:32.488328 | orchestrator |
2026-02-27 00:52:32.488335 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-27 00:52:32.488341 | orchestrator | Friday 27 February 2026 00:51:39 +0000 (0:00:00.272) 0:00:26.082 *******
2026-02-27 00:52:32.488347 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:52:32.488354 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:52:32.488361 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:52:32.488367 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:52:32.488374 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:52:32.488380 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:52:32.488386 | orchestrator |
2026-02-27 00:52:32.488392 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-27 00:52:32.488398 | orchestrator | Friday 27 February 2026 00:51:52 +0000 (0:00:13.265) 0:00:39.348 *******
2026-02-27 00:52:32.488412 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:52:32.488418 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:52:32.488425 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:52:32.488430 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:52:32.488436 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:52:32.488443 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:52:32.488448 | orchestrator |
2026-02-27 00:52:32.488454 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-27 00:52:32.488460 | orchestrator | Friday 27 February 2026 00:51:54 +0000 (0:00:02.101) 0:00:41.450 *******
2026-02-27 00:52:32.488466 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:52:32.488472 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:52:32.488477 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:52:32.488484 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:52:32.488490 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:52:32.488496 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:52:32.488502 | orchestrator |
2026-02-27 00:52:32.488508 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-27 00:52:32.488514 | orchestrator | Friday 27 February 2026 00:52:05 +0000 (0:00:10.484) 0:00:51.934 *******
2026-02-27 00:52:32.488636 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-27 00:52:32.488650 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-27 00:52:32.488656 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-27 00:52:32.488662 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-27 00:52:32.488668 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-27 00:52:32.488674 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-27 00:52:32.488680 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-27 00:52:32.488686 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-27 00:52:32.488692 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-27 00:52:32.488699 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-27 00:52:32.488705 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-27 00:52:32.488711 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-27 00:52:32.488717 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-27 00:52:32.488730 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-27 00:52:32.488737 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-27 00:52:32.488743 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-27 00:52:32.488749 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-27 00:52:32.488755 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-27 00:52:32.488761 | orchestrator |
2026-02-27 00:52:32.488768 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-27 00:52:32.488775 | orchestrator | Friday 27 February 2026 00:52:12 +0000 (0:00:07.769) 0:00:59.704 ******* 2026-02-27 00:52:32.488792 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-27 00:52:32.488799 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:52:32.488805 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-27 00:52:32.488810 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:52:32.488816 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-27 00:52:32.488822 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:52:32.488828 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-27 00:52:32.488833 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-27 00:52:32.488839 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-27 00:52:32.488845 | orchestrator | 2026-02-27 00:52:32.488852 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-27 00:52:32.488858 | orchestrator | Friday 27 February 2026 00:52:16 +0000 (0:00:03.355) 0:01:03.059 ******* 2026-02-27 00:52:32.488864 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-27 00:52:32.488870 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:52:32.488876 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-27 00:52:32.488882 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:52:32.488888 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-27 00:52:32.488894 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:52:32.488900 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-27 00:52:32.488907 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-27 00:52:32.488913 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-27 00:52:32.488919 | orchestrator 
| 2026-02-27 00:52:32.488924 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-27 00:52:32.488931 | orchestrator | Friday 27 February 2026 00:52:20 +0000 (0:00:04.464) 0:01:07.523 ******* 2026-02-27 00:52:32.488936 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:52:32.488942 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:52:32.488949 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:52:32.488955 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:52:32.488961 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:52:32.488967 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:52:32.488973 | orchestrator | 2026-02-27 00:52:32.488979 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:52:32.488986 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-27 00:52:32.489002 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-27 00:52:32.489009 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-27 00:52:32.489016 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-27 00:52:32.489022 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-27 00:52:32.489028 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-27 00:52:32.489033 | orchestrator | 2026-02-27 00:52:32.489039 | orchestrator | 2026-02-27 00:52:32.489045 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:52:32.489051 | orchestrator | Friday 27 February 2026 00:52:30 +0000 (0:00:09.747) 0:01:17.271 ******* 2026-02-27 00:52:32.489058 | 
orchestrator | =============================================================================== 2026-02-27 00:52:32.489072 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 20.23s 2026-02-27 00:52:32.489078 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 13.27s 2026-02-27 00:52:32.489084 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.77s 2026-02-27 00:52:32.489090 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.78s 2026-02-27 00:52:32.489096 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.46s 2026-02-27 00:52:32.489102 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.63s 2026-02-27 00:52:32.489114 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.36s 2026-02-27 00:52:32.489121 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.20s 2026-02-27 00:52:32.489127 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.40s 2026-02-27 00:52:32.489133 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.10s 2026-02-27 00:52:32.489139 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.88s 2026-02-27 00:52:32.489145 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.86s 2026-02-27 00:52:32.489151 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.74s 2026-02-27 00:52:32.489157 | orchestrator | module-load : Load modules ---------------------------------------------- 1.65s 2026-02-27 00:52:32.489163 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.57s 2026-02-27 00:52:32.489169 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 1.06s
2026-02-27 00:52:32.489175 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.99s
2026-02-27 00:52:32.489181 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s
2026-02-27 00:52:32.489187 | orchestrator | 2026-02-27 00:52:32 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:52:32.489553 | orchestrator | 2026-02-27 00:52:32 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED
2026-02-27 00:52:32.492853 | orchestrator | 2026-02-27 00:52:32 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:52:32.493985 | orchestrator | 2026-02-27 00:52:32 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state STARTED
2026-02-27 00:52:32.494140 | orchestrator | 2026-02-27 00:52:32 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:53:37.166962 | orchestrator | 2026-02-27 00:53:37 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:53:37.167915 | orchestrator | 2026-02-27 00:53:37 | INFO  | Task d939b87b-4a2c-449f-8d64-a023a5e00b45 is in state STARTED
2026-02-27 00:53:37.168821 | orchestrator | 2026-02-27 00:53:37 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:53:37.170133 | orchestrator | 2026-02-27 00:53:37 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED
2026-02-27 00:53:37.172846 | orchestrator | 2026-02-27 00:53:37 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:53:37.174734 | orchestrator | 2026-02-27 00:53:37 | INFO  | Task 2e64287f-49d7-47de-89c4-6c8c4ec85c0d is in state SUCCESS
2026-02-27 00:53:37.177917 | orchestrator |
2026-02-27 00:53:37.177987 | orchestrator |
2026-02-27 00:53:37.178014 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-27 00:53:37.178095 | orchestrator |
2026-02-27 00:53:37.178112 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-27 00:53:37.178131 | orchestrator | Friday 27 February 2026 00:48:35 +0000 (0:00:00.235) 0:00:00.235 *******
2026-02-27 00:53:37.178194 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:53:37.178214 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:53:37.178239 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:53:37.178249 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:53:37.178259 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:53:37.178268 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:53:37.178278 | orchestrator |
2026-02-27 00:53:37.178288 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-27 00:53:37.178302 | orchestrator | Friday 27 February 2026 00:48:35 +0000 (0:00:00.942) 0:00:01.178 *******
2026-02-27 00:53:37.178323 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:53:37.178345 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:53:37.178361 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:53:37.178376 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:53:37.178392 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:53:37.178406 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:53:37.178420 | orchestrator |
2026-02-27 00:53:37.178433 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-27 00:53:37.178449 | orchestrator | Friday 27 February 2026 00:48:36 +0000 (0:00:00.770) 0:00:01.948 *******
2026-02-27 00:53:37.178462 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:53:37.178477 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:53:37.178494 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:53:37.178510 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:53:37.178527 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:53:37.178545 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:53:37.178562 | orchestrator |
2026-02-27 00:53:37.178579 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-27 00:53:37.178592 | orchestrator | Friday 27 February 2026 00:48:37 +0000 (0:00:00.969) 0:00:02.917 *******
2026-02-27 00:53:37.178603 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:53:37.178614 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:53:37.178625 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:53:37.178636 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:53:37.178648 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:53:37.178659 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:53:37.178670 | orchestrator |
2026-02-27 00:53:37.178681 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-27 00:53:37.178692 | orchestrator | Friday 27 February 2026 00:48:40 +0000 (0:00:02.632) 0:00:05.550 *******
2026-02-27 00:53:37.178703 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:53:37.178714 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:53:37.178725 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:53:37.178735 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:53:37.178746 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:53:37.178757 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:53:37.178768 | orchestrator |
2026-02-27 00:53:37.178779 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-27 00:53:37.178790 | orchestrator | Friday 27 February 2026 00:48:41 +0000 (0:00:01.548) 0:00:07.098 *******
2026-02-27 00:53:37.178820 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:53:37.178831 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:53:37.178842 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:53:37.178853 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:53:37.178865 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:53:37.178876 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:53:37.178885 | orchestrator |
2026-02-27 00:53:37.178895 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-27 00:53:37.178904 | orchestrator | Friday 27 February 2026 00:48:43 +0000 (0:00:01.258) 0:00:08.357 *******
2026-02-27 00:53:37.178913 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:53:37.178923 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:53:37.178932 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:53:37.178941 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:53:37.178951 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:53:37.178960 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:53:37.178969 | orchestrator |
2026-02-27 00:53:37.178979 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-27 00:53:37.178988 | orchestrator | Friday 27 February 2026 00:48:44 +0000 (0:00:01.016) 0:00:09.374 *******
2026-02-27 00:53:37.178998 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:53:37.179008 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:53:37.179017 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:53:37.179026 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:53:37.179036 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:53:37.179045 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:53:37.179054 | orchestrator |
2026-02-27 00:53:37.179064 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-27 00:53:37.179073 | orchestrator | Friday 27 February 2026 00:48:44 +0000 (0:00:00.539) 0:00:09.917 *******
2026-02-27 00:53:37.179083 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 00:53:37.179092 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 00:53:37.179102 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:53:37.179111 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 00:53:37.179121 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 00:53:37.179130 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:53:37.179162 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 00:53:37.179173 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 00:53:37.179182 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:53:37.179192 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 00:53:37.179222 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 00:53:37.179232 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:53:37.179241 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 00:53:37.179251 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 00:53:37.179267 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:53:37.179277 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 00:53:37.179287 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 00:53:37.179296 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:53:37.179306 | orchestrator |
2026-02-27 00:53:37.179315 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-27 00:53:37.179325 | orchestrator | Friday 27 February 2026 00:48:45 +0000 (0:00:00.682) 0:00:10.600 *******
2026-02-27 00:53:37.179334 | orchestrator | skipping: [testbed-node-3]
2026-02-27 00:53:37.179350 | orchestrator | skipping: [testbed-node-4]
2026-02-27 00:53:37.179360 | orchestrator | skipping: [testbed-node-5]
2026-02-27 00:53:37.179369 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:53:37.179379 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:53:37.179389 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:53:37.179398 | orchestrator |
2026-02-27 00:53:37.179408 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-27 00:53:37.179418 | orchestrator | Friday 27 February 2026 00:48:46 +0000 (0:00:01.189) 0:00:11.789 *******
2026-02-27 00:53:37.179428 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:53:37.179438 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:53:37.179447 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:53:37.179457 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:53:37.179466 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:53:37.179475 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:53:37.179485 | orchestrator |
2026-02-27 00:53:37.179494 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-27 00:53:37.179504 | orchestrator | Friday 27 February 2026 00:48:47 +0000 (0:00:01.082) 0:00:12.871 *******
2026-02-27 00:53:37.179514 | orchestrator | changed: [testbed-node-3]
2026-02-27 00:53:37.179523 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:53:37.179533 | orchestrator | changed: [testbed-node-4]
2026-02-27 00:53:37.179542 | orchestrator | changed: [testbed-node-5]
2026-02-27 00:53:37.179552 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:53:37.179561 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:53:37.179571 | orchestrator |
2026-02-27 00:53:37.179580 | orchestrator |
TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-27 00:53:37.179590 | orchestrator | Friday 27 February 2026 00:48:53 +0000 (0:00:05.491) 0:00:18.363 ******* 2026-02-27 00:53:37.179599 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.179609 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.179618 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.179628 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.179637 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.179647 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.179657 | orchestrator | 2026-02-27 00:53:37.179666 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-27 00:53:37.179676 | orchestrator | Friday 27 February 2026 00:48:54 +0000 (0:00:01.483) 0:00:19.847 ******* 2026-02-27 00:53:37.179685 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.179695 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.179704 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.179714 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.179723 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.179733 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.179742 | orchestrator | 2026-02-27 00:53:37.179752 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-27 00:53:37.179763 | orchestrator | Friday 27 February 2026 00:48:57 +0000 (0:00:02.575) 0:00:22.422 ******* 2026-02-27 00:53:37.179772 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.179782 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.179791 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.179801 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.179810 | orchestrator | 
skipping: [testbed-node-1] 2026-02-27 00:53:37.179820 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.179829 | orchestrator | 2026-02-27 00:53:37.179839 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-27 00:53:37.179849 | orchestrator | Friday 27 February 2026 00:48:58 +0000 (0:00:01.328) 0:00:23.751 ******* 2026-02-27 00:53:37.179858 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-27 00:53:37.179868 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-27 00:53:37.179884 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.179893 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-27 00:53:37.179903 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-27 00:53:37.179913 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.179922 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-27 00:53:37.179932 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-27 00:53:37.179941 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.179951 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-27 00:53:37.179960 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-27 00:53:37.179970 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.179980 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-27 00:53:37.179990 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-27 00:53:37.179999 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.180009 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-27 00:53:37.180018 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-27 00:53:37.180028 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.180037 | orchestrator | 2026-02-27 00:53:37.180047 | orchestrator 
| TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-27 00:53:37.180063 | orchestrator | Friday 27 February 2026 00:49:01 +0000 (0:00:02.794) 0:00:26.545 ******* 2026-02-27 00:53:37.180073 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.180083 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.180102 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.180112 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.180121 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.180131 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.180157 | orchestrator | 2026-02-27 00:53:37.180172 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-27 00:53:37.180182 | orchestrator | Friday 27 February 2026 00:49:03 +0000 (0:00:01.691) 0:00:28.237 ******* 2026-02-27 00:53:37.180192 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.180201 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.180211 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.180220 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.180230 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.180239 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.180249 | orchestrator | 2026-02-27 00:53:37.180258 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-27 00:53:37.180268 | orchestrator | 2026-02-27 00:53:37.180278 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-27 00:53:37.180287 | orchestrator | Friday 27 February 2026 00:49:07 +0000 (0:00:04.026) 0:00:32.263 ******* 2026-02-27 00:53:37.180297 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.180306 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.180316 | 
orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.180326 | orchestrator | 2026-02-27 00:53:37.180335 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-27 00:53:37.180345 | orchestrator | Friday 27 February 2026 00:49:11 +0000 (0:00:04.095) 0:00:36.359 ******* 2026-02-27 00:53:37.180355 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.180364 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.180374 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.180383 | orchestrator | 2026-02-27 00:53:37.180393 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-27 00:53:37.180403 | orchestrator | Friday 27 February 2026 00:49:13 +0000 (0:00:01.967) 0:00:38.327 ******* 2026-02-27 00:53:37.180412 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.180422 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.180431 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.180441 | orchestrator | 2026-02-27 00:53:37.180457 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-27 00:53:37.180466 | orchestrator | Friday 27 February 2026 00:49:14 +0000 (0:00:01.249) 0:00:39.577 ******* 2026-02-27 00:53:37.180476 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.180486 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.180495 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.180504 | orchestrator | 2026-02-27 00:53:37.180514 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-27 00:53:37.180523 | orchestrator | Friday 27 February 2026 00:49:15 +0000 (0:00:01.417) 0:00:40.994 ******* 2026-02-27 00:53:37.180533 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.180543 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.180552 | orchestrator | skipping: [testbed-node-2] 2026-02-27 
00:53:37.180562 | orchestrator | 2026-02-27 00:53:37.180572 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-27 00:53:37.180581 | orchestrator | Friday 27 February 2026 00:49:16 +0000 (0:00:00.612) 0:00:41.607 ******* 2026-02-27 00:53:37.180591 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.180600 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.180610 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.180619 | orchestrator | 2026-02-27 00:53:37.180629 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-27 00:53:37.180639 | orchestrator | Friday 27 February 2026 00:49:18 +0000 (0:00:01.779) 0:00:43.386 ******* 2026-02-27 00:53:37.180648 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.180658 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.180668 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.180677 | orchestrator | 2026-02-27 00:53:37.180687 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-27 00:53:37.180696 | orchestrator | Friday 27 February 2026 00:49:20 +0000 (0:00:01.976) 0:00:45.362 ******* 2026-02-27 00:53:37.180706 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:53:37.180716 | orchestrator | 2026-02-27 00:53:37.180725 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-27 00:53:37.180735 | orchestrator | Friday 27 February 2026 00:49:20 +0000 (0:00:00.750) 0:00:46.113 ******* 2026-02-27 00:53:37.180745 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.180755 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.180764 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.180774 | orchestrator | 2026-02-27 00:53:37.180783 | orchestrator | TASK [k3s_server : 
Create manifests directory on first master] ***************** 2026-02-27 00:53:37.180793 | orchestrator | Friday 27 February 2026 00:49:23 +0000 (0:00:03.105) 0:00:49.218 ******* 2026-02-27 00:53:37.180803 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.180812 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.180822 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.180832 | orchestrator | 2026-02-27 00:53:37.180841 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-27 00:53:37.180851 | orchestrator | Friday 27 February 2026 00:49:25 +0000 (0:00:01.086) 0:00:50.307 ******* 2026-02-27 00:53:37.180861 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.180870 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.180880 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.180889 | orchestrator | 2026-02-27 00:53:37.180899 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-27 00:53:37.180909 | orchestrator | Friday 27 February 2026 00:49:26 +0000 (0:00:01.530) 0:00:51.838 ******* 2026-02-27 00:53:37.180918 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.180928 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.180937 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.180947 | orchestrator | 2026-02-27 00:53:37.180957 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-27 00:53:37.180972 | orchestrator | Friday 27 February 2026 00:49:28 +0000 (0:00:02.123) 0:00:53.961 ******* 2026-02-27 00:53:37.180987 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.180997 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.181007 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.181017 | orchestrator | 2026-02-27 00:53:37.181026 | orchestrator | TASK [k3s_server : Deploy 
kube-vip manifest] *********************************** 2026-02-27 00:53:37.181040 | orchestrator | Friday 27 February 2026 00:49:30 +0000 (0:00:02.219) 0:00:56.180 ******* 2026-02-27 00:53:37.181050 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.181060 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.181069 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.181079 | orchestrator | 2026-02-27 00:53:37.181088 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-27 00:53:37.181098 | orchestrator | Friday 27 February 2026 00:49:31 +0000 (0:00:00.712) 0:00:56.892 ******* 2026-02-27 00:53:37.181107 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.181117 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.181126 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.181136 | orchestrator | 2026-02-27 00:53:37.181218 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-27 00:53:37.181228 | orchestrator | Friday 27 February 2026 00:49:33 +0000 (0:00:01.765) 0:00:58.658 ******* 2026-02-27 00:53:37.181238 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.181247 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.181257 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.181266 | orchestrator | 2026-02-27 00:53:37.181276 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-27 00:53:37.181286 | orchestrator | Friday 27 February 2026 00:49:35 +0000 (0:00:02.398) 0:01:01.057 ******* 2026-02-27 00:53:37.181296 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.181305 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.181315 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.181325 | orchestrator | 2026-02-27 00:53:37.181334 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check 
k3s-init.service if this fails)] *** 2026-02-27 00:53:37.181344 | orchestrator | Friday 27 February 2026 00:49:36 +0000 (0:00:00.963) 0:01:02.021 ******* 2026-02-27 00:53:37.181354 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-27 00:53:37.181365 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-27 00:53:37.181374 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-27 00:53:37.181384 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-27 00:53:37.181394 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-27 00:53:37.181403 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-27 00:53:37.181413 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-27 00:53:37.181422 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-27 00:53:37.181432 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-27 00:53:37.181441 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2026-02-27 00:53:37.181451 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-27 00:53:37.181467 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-27 00:53:37.181476 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.181486 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.181496 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.181505 | orchestrator | 2026-02-27 00:53:37.181516 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-27 00:53:37.181525 | orchestrator | Friday 27 February 2026 00:50:20 +0000 (0:00:43.451) 0:01:45.472 ******* 2026-02-27 00:53:37.181535 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.181545 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.181554 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.181564 | orchestrator | 2026-02-27 00:53:37.181574 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-27 00:53:37.181583 | orchestrator | Friday 27 February 2026 00:50:21 +0000 (0:00:00.772) 0:01:46.245 ******* 2026-02-27 00:53:37.181593 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.181603 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.181612 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.181622 | orchestrator | 2026-02-27 00:53:37.181632 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-27 00:53:37.181641 | orchestrator | Friday 27 February 2026 00:50:22 +0000 (0:00:01.516) 0:01:47.761 ******* 2026-02-27 00:53:37.181651 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.181661 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.181670 | 
orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.181680 | orchestrator | 2026-02-27 00:53:37.181695 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-27 00:53:37.181705 | orchestrator | Friday 27 February 2026 00:50:25 +0000 (0:00:02.780) 0:01:50.542 ******* 2026-02-27 00:53:37.181715 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.181724 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.181734 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.181744 | orchestrator | 2026-02-27 00:53:37.181754 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-27 00:53:37.181764 | orchestrator | Friday 27 February 2026 00:50:53 +0000 (0:00:28.046) 0:02:18.588 ******* 2026-02-27 00:53:37.181773 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.181783 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.181793 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.181802 | orchestrator | 2026-02-27 00:53:37.181812 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-27 00:53:37.181821 | orchestrator | Friday 27 February 2026 00:50:53 +0000 (0:00:00.587) 0:02:19.176 ******* 2026-02-27 00:53:37.181831 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.181841 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.181850 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.181860 | orchestrator | 2026-02-27 00:53:37.181869 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-27 00:53:37.181879 | orchestrator | Friday 27 February 2026 00:50:54 +0000 (0:00:00.625) 0:02:19.802 ******* 2026-02-27 00:53:37.181889 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.181898 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.181908 | orchestrator | changed: [testbed-node-2] 
2026-02-27 00:53:37.181917 | orchestrator | 2026-02-27 00:53:37.181927 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-27 00:53:37.181936 | orchestrator | Friday 27 February 2026 00:50:55 +0000 (0:00:00.608) 0:02:20.410 ******* 2026-02-27 00:53:37.181946 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.181955 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.181965 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.181975 | orchestrator | 2026-02-27 00:53:37.181985 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-27 00:53:37.182000 | orchestrator | Friday 27 February 2026 00:50:56 +0000 (0:00:00.823) 0:02:21.233 ******* 2026-02-27 00:53:37.182009 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.182065 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.182075 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.182085 | orchestrator | 2026-02-27 00:53:37.182094 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-27 00:53:37.182104 | orchestrator | Friday 27 February 2026 00:50:56 +0000 (0:00:00.315) 0:02:21.549 ******* 2026-02-27 00:53:37.182114 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.182123 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.182133 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.182160 | orchestrator | 2026-02-27 00:53:37.182170 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-27 00:53:37.182179 | orchestrator | Friday 27 February 2026 00:50:56 +0000 (0:00:00.648) 0:02:22.197 ******* 2026-02-27 00:53:37.182189 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.182199 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.182208 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.182218 | orchestrator | 
2026-02-27 00:53:37.182228 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-27 00:53:37.182238 | orchestrator | Friday 27 February 2026 00:50:57 +0000 (0:00:00.789) 0:02:22.987 ******* 2026-02-27 00:53:37.182247 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.182257 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.182266 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.182276 | orchestrator | 2026-02-27 00:53:37.182286 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-27 00:53:37.182295 | orchestrator | Friday 27 February 2026 00:50:58 +0000 (0:00:01.218) 0:02:24.205 ******* 2026-02-27 00:53:37.182305 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:53:37.182315 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:53:37.182324 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:53:37.182334 | orchestrator | 2026-02-27 00:53:37.182343 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-27 00:53:37.182353 | orchestrator | Friday 27 February 2026 00:50:59 +0000 (0:00:00.787) 0:02:24.993 ******* 2026-02-27 00:53:37.182362 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.182372 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.182382 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.182391 | orchestrator | 2026-02-27 00:53:37.182401 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-27 00:53:37.182410 | orchestrator | Friday 27 February 2026 00:51:00 +0000 (0:00:00.381) 0:02:25.375 ******* 2026-02-27 00:53:37.182429 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.182439 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.182448 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.182458 | orchestrator | 
2026-02-27 00:53:37.182468 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-27 00:53:37.182477 | orchestrator | Friday 27 February 2026 00:51:00 +0000 (0:00:00.599) 0:02:25.974 ******* 2026-02-27 00:53:37.182487 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.182497 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.182506 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.182516 | orchestrator | 2026-02-27 00:53:37.182525 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-27 00:53:37.182535 | orchestrator | Friday 27 February 2026 00:51:01 +0000 (0:00:01.104) 0:02:27.079 ******* 2026-02-27 00:53:37.182545 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.182554 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.182564 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.182574 | orchestrator | 2026-02-27 00:53:37.182584 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-27 00:53:37.182601 | orchestrator | Friday 27 February 2026 00:51:02 +0000 (0:00:00.649) 0:02:27.729 ******* 2026-02-27 00:53:37.183288 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-27 00:53:37.183329 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-27 00:53:37.183340 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-27 00:53:37.183350 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-27 00:53:37.183360 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-27 00:53:37.183370 | orchestrator | changed: 
[testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-27 00:53:37.183385 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-27 00:53:37.183394 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-27 00:53:37.183404 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-27 00:53:37.183414 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-27 00:53:37.183423 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-27 00:53:37.183433 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-27 00:53:37.183442 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-27 00:53:37.183452 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-27 00:53:37.183461 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-27 00:53:37.183471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-27 00:53:37.183480 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-27 00:53:37.183490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-27 00:53:37.183500 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-27 00:53:37.183509 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-27 00:53:37.183519 | orchestrator | 2026-02-27 00:53:37.183528 | orchestrator | 
PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-27 00:53:37.183538 | orchestrator | 2026-02-27 00:53:37.183548 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-27 00:53:37.183557 | orchestrator | Friday 27 February 2026 00:51:06 +0000 (0:00:03.510) 0:02:31.240 ******* 2026-02-27 00:53:37.183567 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:53:37.183577 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:53:37.183586 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:53:37.183596 | orchestrator | 2026-02-27 00:53:37.183606 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-27 00:53:37.183615 | orchestrator | Friday 27 February 2026 00:51:06 +0000 (0:00:00.757) 0:02:31.997 ******* 2026-02-27 00:53:37.183625 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:53:37.183634 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:53:37.183644 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:53:37.183653 | orchestrator | 2026-02-27 00:53:37.183663 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-27 00:53:37.183672 | orchestrator | Friday 27 February 2026 00:51:07 +0000 (0:00:00.823) 0:02:32.821 ******* 2026-02-27 00:53:37.183682 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:53:37.183691 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:53:37.183701 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:53:37.183719 | orchestrator | 2026-02-27 00:53:37.183729 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-27 00:53:37.183738 | orchestrator | Friday 27 February 2026 00:51:08 +0000 (0:00:00.446) 0:02:33.268 ******* 2026-02-27 00:53:37.183748 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 00:53:37.183758 | 
orchestrator | 2026-02-27 00:53:37.183768 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-27 00:53:37.183777 | orchestrator | Friday 27 February 2026 00:51:08 +0000 (0:00:00.821) 0:02:34.090 ******* 2026-02-27 00:53:37.183787 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.183797 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.183807 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.183816 | orchestrator | 2026-02-27 00:53:37.183825 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-27 00:53:37.183835 | orchestrator | Friday 27 February 2026 00:51:09 +0000 (0:00:00.355) 0:02:34.445 ******* 2026-02-27 00:53:37.183844 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.183854 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.183863 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.183873 | orchestrator | 2026-02-27 00:53:37.183882 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-27 00:53:37.183892 | orchestrator | Friday 27 February 2026 00:51:09 +0000 (0:00:00.411) 0:02:34.856 ******* 2026-02-27 00:53:37.183902 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.183911 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.183921 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.183930 | orchestrator | 2026-02-27 00:53:37.183940 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-27 00:53:37.183949 | orchestrator | Friday 27 February 2026 00:51:10 +0000 (0:00:00.486) 0:02:35.343 ******* 2026-02-27 00:53:37.183959 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:53:37.183969 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:53:37.183978 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:53:37.183988 | 
orchestrator | 2026-02-27 00:53:37.184004 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-27 00:53:37.184014 | orchestrator | Friday 27 February 2026 00:51:11 +0000 (0:00:01.220) 0:02:36.564 ******* 2026-02-27 00:53:37.184024 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:53:37.184033 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:53:37.184043 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:53:37.184053 | orchestrator | 2026-02-27 00:53:37.184062 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-27 00:53:37.184072 | orchestrator | Friday 27 February 2026 00:51:12 +0000 (0:00:01.227) 0:02:37.791 ******* 2026-02-27 00:53:37.184082 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:53:37.184091 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:53:37.184105 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:53:37.184115 | orchestrator | 2026-02-27 00:53:37.184125 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-27 00:53:37.184134 | orchestrator | Friday 27 February 2026 00:51:13 +0000 (0:00:01.364) 0:02:39.156 ******* 2026-02-27 00:53:37.184202 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:53:37.184212 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:53:37.184222 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:53:37.184231 | orchestrator | 2026-02-27 00:53:37.184241 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-27 00:53:37.184251 | orchestrator | 2026-02-27 00:53:37.184260 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-27 00:53:37.184270 | orchestrator | Friday 27 February 2026 00:51:24 +0000 (0:00:10.955) 0:02:50.111 ******* 2026-02-27 00:53:37.184279 | orchestrator | ok: [testbed-manager] 2026-02-27 
00:53:37.184289 | orchestrator | 2026-02-27 00:53:37.184299 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-27 00:53:37.184315 | orchestrator | Friday 27 February 2026 00:51:25 +0000 (0:00:01.060) 0:02:51.172 ******* 2026-02-27 00:53:37.184325 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.184335 | orchestrator | 2026-02-27 00:53:37.184344 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-27 00:53:37.184354 | orchestrator | Friday 27 February 2026 00:51:26 +0000 (0:00:00.673) 0:02:51.845 ******* 2026-02-27 00:53:37.184363 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-27 00:53:37.184373 | orchestrator | 2026-02-27 00:53:37.184382 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-27 00:53:37.184392 | orchestrator | Friday 27 February 2026 00:51:27 +0000 (0:00:00.654) 0:02:52.499 ******* 2026-02-27 00:53:37.184401 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.184411 | orchestrator | 2026-02-27 00:53:37.184420 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-27 00:53:37.184430 | orchestrator | Friday 27 February 2026 00:51:28 +0000 (0:00:01.013) 0:02:53.513 ******* 2026-02-27 00:53:37.184439 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.184449 | orchestrator | 2026-02-27 00:53:37.184458 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-27 00:53:37.184468 | orchestrator | Friday 27 February 2026 00:51:28 +0000 (0:00:00.649) 0:02:54.163 ******* 2026-02-27 00:53:37.184477 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-27 00:53:37.184487 | orchestrator | 2026-02-27 00:53:37.184496 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-27 
00:53:37.184506 | orchestrator | Friday 27 February 2026 00:51:31 +0000 (0:00:02.120) 0:02:56.283 ******* 2026-02-27 00:53:37.184515 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-27 00:53:37.184525 | orchestrator | 2026-02-27 00:53:37.184534 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-27 00:53:37.184543 | orchestrator | Friday 27 February 2026 00:51:32 +0000 (0:00:01.011) 0:02:57.295 ******* 2026-02-27 00:53:37.184553 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.184563 | orchestrator | 2026-02-27 00:53:37.184572 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-27 00:53:37.184597 | orchestrator | Friday 27 February 2026 00:51:32 +0000 (0:00:00.856) 0:02:58.152 ******* 2026-02-27 00:53:37.184607 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.184627 | orchestrator | 2026-02-27 00:53:37.184636 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-27 00:53:37.184646 | orchestrator | 2026-02-27 00:53:37.184656 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-27 00:53:37.184665 | orchestrator | Friday 27 February 2026 00:51:33 +0000 (0:00:00.503) 0:02:58.658 ******* 2026-02-27 00:53:37.184675 | orchestrator | ok: [testbed-manager] 2026-02-27 00:53:37.184685 | orchestrator | 2026-02-27 00:53:37.184694 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-27 00:53:37.184704 | orchestrator | Friday 27 February 2026 00:51:33 +0000 (0:00:00.190) 0:02:58.848 ******* 2026-02-27 00:53:37.184714 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-27 00:53:37.184723 | orchestrator | 2026-02-27 00:53:37.184733 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] 
****************** 2026-02-27 00:53:37.184743 | orchestrator | Friday 27 February 2026 00:51:34 +0000 (0:00:00.408) 0:02:59.256 ******* 2026-02-27 00:53:37.184753 | orchestrator | ok: [testbed-manager] 2026-02-27 00:53:37.184762 | orchestrator | 2026-02-27 00:53:37.184772 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-27 00:53:37.184781 | orchestrator | Friday 27 February 2026 00:51:35 +0000 (0:00:01.058) 0:03:00.314 ******* 2026-02-27 00:53:37.184791 | orchestrator | ok: [testbed-manager] 2026-02-27 00:53:37.184801 | orchestrator | 2026-02-27 00:53:37.184810 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-27 00:53:37.184826 | orchestrator | Friday 27 February 2026 00:51:36 +0000 (0:00:01.809) 0:03:02.124 ******* 2026-02-27 00:53:37.184836 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.184846 | orchestrator | 2026-02-27 00:53:37.184855 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-27 00:53:37.184865 | orchestrator | Friday 27 February 2026 00:51:37 +0000 (0:00:00.933) 0:03:03.058 ******* 2026-02-27 00:53:37.184875 | orchestrator | ok: [testbed-manager] 2026-02-27 00:53:37.184884 | orchestrator | 2026-02-27 00:53:37.184900 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-27 00:53:37.184911 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:00.613) 0:03:03.672 ******* 2026-02-27 00:53:37.184921 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.184931 | orchestrator | 2026-02-27 00:53:37.184940 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-27 00:53:37.184950 | orchestrator | Friday 27 February 2026 00:51:49 +0000 (0:00:10.593) 0:03:14.266 ******* 2026-02-27 00:53:37.184959 | orchestrator | changed: [testbed-manager] 2026-02-27 
00:53:37.184969 | orchestrator | 2026-02-27 00:53:37.184979 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-27 00:53:37.184994 | orchestrator | Friday 27 February 2026 00:52:06 +0000 (0:00:17.884) 0:03:32.150 ******* 2026-02-27 00:53:37.185004 | orchestrator | ok: [testbed-manager] 2026-02-27 00:53:37.185014 | orchestrator | 2026-02-27 00:53:37.185023 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-27 00:53:37.185033 | orchestrator | 2026-02-27 00:53:37.185043 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-27 00:53:37.185052 | orchestrator | Friday 27 February 2026 00:52:07 +0000 (0:00:00.551) 0:03:32.702 ******* 2026-02-27 00:53:37.185062 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.185072 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.185081 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.185091 | orchestrator | 2026-02-27 00:53:37.185101 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-27 00:53:37.185110 | orchestrator | Friday 27 February 2026 00:52:07 +0000 (0:00:00.279) 0:03:32.982 ******* 2026-02-27 00:53:37.185120 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185129 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.185155 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.185166 | orchestrator | 2026-02-27 00:53:37.185175 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-27 00:53:37.185185 | orchestrator | Friday 27 February 2026 00:52:08 +0000 (0:00:00.317) 0:03:33.299 ******* 2026-02-27 00:53:37.185195 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:53:37.185204 | orchestrator | 
2026-02-27 00:53:37.185214 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-27 00:53:37.185223 | orchestrator | Friday 27 February 2026 00:52:08 +0000 (0:00:00.674) 0:03:33.973 ******* 2026-02-27 00:53:37.185233 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-27 00:53:37.185243 | orchestrator | 2026-02-27 00:53:37.185252 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-27 00:53:37.185262 | orchestrator | Friday 27 February 2026 00:52:09 +0000 (0:00:00.762) 0:03:34.736 ******* 2026-02-27 00:53:37.185272 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 00:53:37.185282 | orchestrator | 2026-02-27 00:53:37.185291 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-27 00:53:37.185301 | orchestrator | Friday 27 February 2026 00:52:10 +0000 (0:00:00.962) 0:03:35.699 ******* 2026-02-27 00:53:37.185310 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185320 | orchestrator | 2026-02-27 00:53:37.185330 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-27 00:53:37.185339 | orchestrator | Friday 27 February 2026 00:52:10 +0000 (0:00:00.181) 0:03:35.881 ******* 2026-02-27 00:53:37.185356 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 00:53:37.185366 | orchestrator | 2026-02-27 00:53:37.185375 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-27 00:53:37.185385 | orchestrator | Friday 27 February 2026 00:52:11 +0000 (0:00:01.110) 0:03:36.991 ******* 2026-02-27 00:53:37.185395 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185404 | orchestrator | 2026-02-27 00:53:37.185414 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-27 00:53:37.185423 | orchestrator | Friday 27 February 
2026 00:52:11 +0000 (0:00:00.133) 0:03:37.125 ******* 2026-02-27 00:53:37.185433 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185442 | orchestrator | 2026-02-27 00:53:37.185452 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-27 00:53:37.185461 | orchestrator | Friday 27 February 2026 00:52:12 +0000 (0:00:00.131) 0:03:37.256 ******* 2026-02-27 00:53:37.185471 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185480 | orchestrator | 2026-02-27 00:53:37.185490 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-27 00:53:37.185499 | orchestrator | Friday 27 February 2026 00:52:12 +0000 (0:00:00.146) 0:03:37.402 ******* 2026-02-27 00:53:37.185509 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185519 | orchestrator | 2026-02-27 00:53:37.185528 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-27 00:53:37.185538 | orchestrator | Friday 27 February 2026 00:52:12 +0000 (0:00:00.127) 0:03:37.529 ******* 2026-02-27 00:53:37.185547 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-27 00:53:37.185557 | orchestrator | 2026-02-27 00:53:37.185566 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-27 00:53:37.185576 | orchestrator | Friday 27 February 2026 00:52:18 +0000 (0:00:06.037) 0:03:43.567 ******* 2026-02-27 00:53:37.185585 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-27 00:53:37.185595 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-27 00:53:37.185604 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-27 00:53:37.185614 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-27 00:53:37.185623 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-27 00:53:37.185632 | orchestrator | 2026-02-27 00:53:37.185642 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-27 00:53:37.185652 | orchestrator | Friday 27 February 2026 00:53:01 +0000 (0:00:42.711) 0:04:26.279 ******* 2026-02-27 00:53:37.185667 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 00:53:37.185677 | orchestrator | 2026-02-27 00:53:37.185686 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-27 00:53:37.185696 | orchestrator | Friday 27 February 2026 00:53:02 +0000 (0:00:01.370) 0:04:27.650 ******* 2026-02-27 00:53:37.185705 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-27 00:53:37.185715 | orchestrator | 2026-02-27 00:53:37.185724 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-27 00:53:37.185734 | orchestrator | Friday 27 February 2026 00:53:04 +0000 (0:00:01.690) 0:04:29.340 ******* 2026-02-27 00:53:37.185743 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-27 00:53:37.185753 | orchestrator | 2026-02-27 00:53:37.185767 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-27 00:53:37.185777 | orchestrator | Friday 27 February 2026 00:53:05 +0000 (0:00:01.172) 0:04:30.513 ******* 2026-02-27 00:53:37.185786 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185796 | orchestrator | 2026-02-27 00:53:37.185806 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-27 00:53:37.185815 | orchestrator 
| Friday 27 February 2026 00:53:05 +0000 (0:00:00.127) 0:04:30.641 ******* 2026-02-27 00:53:37.185825 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-27 00:53:37.185840 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-27 00:53:37.185850 | orchestrator | 2026-02-27 00:53:37.185860 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-27 00:53:37.185869 | orchestrator | Friday 27 February 2026 00:53:08 +0000 (0:00:02.598) 0:04:33.239 ******* 2026-02-27 00:53:37.185879 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.185889 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.185898 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.185908 | orchestrator | 2026-02-27 00:53:37.185918 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-27 00:53:37.185927 | orchestrator | Friday 27 February 2026 00:53:08 +0000 (0:00:00.423) 0:04:33.662 ******* 2026-02-27 00:53:37.185937 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.185946 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.185956 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.185965 | orchestrator | 2026-02-27 00:53:37.185975 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-27 00:53:37.185984 | orchestrator | 2026-02-27 00:53:37.185994 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-27 00:53:37.186004 | orchestrator | Friday 27 February 2026 00:53:09 +0000 (0:00:01.361) 0:04:35.024 ******* 2026-02-27 00:53:37.186014 | orchestrator | ok: [testbed-manager] 2026-02-27 00:53:37.186086 | orchestrator | 2026-02-27 00:53:37.186096 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-27 00:53:37.186106 | orchestrator | Friday 27 February 2026 00:53:09 +0000 (0:00:00.137) 0:04:35.162 ******* 2026-02-27 00:53:37.186115 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-27 00:53:37.186125 | orchestrator | 2026-02-27 00:53:37.186134 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-27 00:53:37.186160 | orchestrator | Friday 27 February 2026 00:53:10 +0000 (0:00:00.242) 0:04:35.405 ******* 2026-02-27 00:53:37.186170 | orchestrator | changed: [testbed-manager] 2026-02-27 00:53:37.186180 | orchestrator | 2026-02-27 00:53:37.186189 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-27 00:53:37.186199 | orchestrator | 2026-02-27 00:53:37.186209 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-27 00:53:37.186218 | orchestrator | Friday 27 February 2026 00:53:15 +0000 (0:00:05.215) 0:04:40.621 ******* 2026-02-27 00:53:37.186228 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:53:37.186238 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:53:37.186247 | orchestrator | ok: [testbed-node-5] 2026-02-27 00:53:37.186257 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:53:37.186266 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:53:37.186276 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:53:37.186285 | orchestrator | 2026-02-27 00:53:37.186295 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-27 00:53:37.186305 | orchestrator | Friday 27 February 2026 00:53:16 +0000 (0:00:00.925) 0:04:41.546 ******* 2026-02-27 00:53:37.186315 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-27 00:53:37.186325 | orchestrator | ok: [testbed-node-2 -> localhost] => 
(item=node-role.osism.tech/control-plane=true) 2026-02-27 00:53:37.186334 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-27 00:53:37.186344 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-27 00:53:37.186354 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-27 00:53:37.186364 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-27 00:53:37.186373 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-27 00:53:37.186391 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-27 00:53:37.186400 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-27 00:53:37.186410 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-27 00:53:37.186420 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-27 00:53:37.186429 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-27 00:53:37.186445 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-27 00:53:37.186456 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-27 00:53:37.186465 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-27 00:53:37.186475 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-27 00:53:37.186484 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-27 00:53:37.186494 | orchestrator | ok: [testbed-node-2 -> 
localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-27 00:53:37.186508 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-27 00:53:37.186519 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-27 00:53:37.186528 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-27 00:53:37.186538 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-27 00:53:37.186547 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-27 00:53:37.186557 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-27 00:53:37.186567 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-27 00:53:37.186577 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-27 00:53:37.186586 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-27 00:53:37.186596 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-27 00:53:37.186605 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-27 00:53:37.186615 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-27 00:53:37.186625 | orchestrator | 2026-02-27 00:53:37.186634 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-27 00:53:37.186643 | orchestrator | Friday 27 February 2026 00:53:33 +0000 (0:00:17.245) 0:04:58.792 ******* 2026-02-27 00:53:37.186653 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.186663 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.186673 | 
orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.186682 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.186692 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.186702 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.186711 | orchestrator | 2026-02-27 00:53:37.186721 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-27 00:53:37.186731 | orchestrator | Friday 27 February 2026 00:53:34 +0000 (0:00:00.774) 0:04:59.566 ******* 2026-02-27 00:53:37.186740 | orchestrator | skipping: [testbed-node-3] 2026-02-27 00:53:37.186750 | orchestrator | skipping: [testbed-node-4] 2026-02-27 00:53:37.186760 | orchestrator | skipping: [testbed-node-5] 2026-02-27 00:53:37.186769 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:53:37.186779 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:53:37.186788 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:53:37.186798 | orchestrator | 2026-02-27 00:53:37.186814 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:53:37.186824 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:53:37.186836 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-27 00:53:37.186847 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-27 00:53:37.186857 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-27 00:53:37.186867 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-27 00:53:37.186877 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-27 00:53:37.186886 | orchestrator | 
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-27 00:53:37.186896 | orchestrator | 2026-02-27 00:53:37.186906 | orchestrator | 2026-02-27 00:53:37.186916 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:53:37.186926 | orchestrator | Friday 27 February 2026 00:53:34 +0000 (0:00:00.488) 0:05:00.055 ******* 2026-02-27 00:53:37.186936 | orchestrator | =============================================================================== 2026-02-27 00:53:37.186946 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.45s 2026-02-27 00:53:37.186956 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.71s 2026-02-27 00:53:37.186966 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 28.05s 2026-02-27 00:53:37.186980 | orchestrator | kubectl : Install required packages ------------------------------------ 17.88s 2026-02-27 00:53:37.186990 | orchestrator | Manage labels ---------------------------------------------------------- 17.25s 2026-02-27 00:53:37.187000 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.96s 2026-02-27 00:53:37.187009 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.59s 2026-02-27 00:53:37.187019 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.04s 2026-02-27 00:53:37.187029 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.49s 2026-02-27 00:53:37.187043 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.22s 2026-02-27 00:53:37.187052 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 4.10s 2026-02-27 00:53:37.187062 | orchestrator | 
k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 4.03s 2026-02-27 00:53:37.187072 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.51s 2026-02-27 00:53:37.187082 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.11s 2026-02-27 00:53:37.187091 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.79s 2026-02-27 00:53:37.187101 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 2.78s 2026-02-27 00:53:37.187111 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.63s 2026-02-27 00:53:37.187121 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.60s 2026-02-27 00:53:37.187130 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.58s 2026-02-27 00:53:37.187180 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.40s 2026-02-27 00:53:37.187202 | orchestrator | 2026-02-27 00:53:37 | INFO  | Task 048e5920-5813-4866-85a0-9ce57b3b41a8 is in state STARTED 2026-02-27 00:53:37.187212 | orchestrator | 2026-02-27 00:53:37 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:53:40.217049 | orchestrator | 2026-02-27 00:53:40 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:53:40.220471 | orchestrator | 2026-02-27 00:53:40 | INFO  | Task d939b87b-4a2c-449f-8d64-a023a5e00b45 is in state STARTED 2026-02-27 00:53:40.222450 | orchestrator | 2026-02-27 00:53:40 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:53:40.224450 | orchestrator | 2026-02-27 00:53:40 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:53:40.226314 | orchestrator | 2026-02-27 
00:53:40 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:53:40.228245 | orchestrator | 2026-02-27 00:53:40 | INFO  | Task 048e5920-5813-4866-85a0-9ce57b3b41a8 is in state STARTED 2026-02-27 00:53:40.228304 | orchestrator | 2026-02-27 00:53:40 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:53:43.266619 | orchestrator | 2026-02-27 00:53:43 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:53:43.266732 | orchestrator | 2026-02-27 00:53:43 | INFO  | Task d939b87b-4a2c-449f-8d64-a023a5e00b45 is in state STARTED 2026-02-27 00:53:43.267404 | orchestrator | 2026-02-27 00:53:43 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:53:43.268418 | orchestrator | 2026-02-27 00:53:43 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:53:43.270737 | orchestrator | 2026-02-27 00:53:43 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:53:43.272756 | orchestrator | 2026-02-27 00:53:43 | INFO  | Task 048e5920-5813-4866-85a0-9ce57b3b41a8 is in state STARTED 2026-02-27 00:53:43.272808 | orchestrator | 2026-02-27 00:53:43 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:53:46.317335 | orchestrator | 2026-02-27 00:53:46 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:53:46.318339 | orchestrator | 2026-02-27 00:53:46 | INFO  | Task d939b87b-4a2c-449f-8d64-a023a5e00b45 is in state STARTED 2026-02-27 00:53:46.319240 | orchestrator | 2026-02-27 00:53:46 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:53:46.321197 | orchestrator | 2026-02-27 00:53:46 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:53:46.322609 | orchestrator | 2026-02-27 00:53:46 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:53:46.323240 | orchestrator | 2026-02-27 
00:53:46 | INFO  | Task 048e5920-5813-4866-85a0-9ce57b3b41a8 is in state SUCCESS 2026-02-27 00:53:46.323613 | orchestrator | 2026-02-27 00:53:46 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:53:49.370409 | orchestrator | 2026-02-27 00:53:49 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:53:49.370580 | orchestrator | 2026-02-27 00:53:49 | INFO  | Task d939b87b-4a2c-449f-8d64-a023a5e00b45 is in state SUCCESS 2026-02-27 00:53:49.371379 | orchestrator | 2026-02-27 00:53:49 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:53:49.372312 | orchestrator | 2026-02-27 00:53:49 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:53:49.373016 | orchestrator | 2026-02-27 00:53:49 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:53:49.373083 | orchestrator | 2026-02-27 00:53:49 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:53:52.407818 | orchestrator | 2026-02-27 00:53:52 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:53:52.407908 | orchestrator | 2026-02-27 00:53:52 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:53:52.409892 | orchestrator | 2026-02-27 00:53:52 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:53:52.410444 | orchestrator | 2026-02-27 00:53:52 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:53:52.410646 | orchestrator | 2026-02-27 00:53:52 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:53:55.442623 | orchestrator | 2026-02-27 00:53:55 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:53:55.443499 | orchestrator | 2026-02-27 00:53:55 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:53:55.445338 | orchestrator | 2026-02-27 00:53:55 | INFO  | Task 
71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:53:55.447940 | orchestrator | 2026-02-27 00:53:55 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:53:55.447987 | orchestrator | 2026-02-27 00:53:55 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:53:58.490794 | orchestrator | 2026-02-27 00:53:58 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:53:58.493191 | orchestrator | 2026-02-27 00:53:58 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:53:58.495047 | orchestrator | 2026-02-27 00:53:58 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:53:58.496693 | orchestrator | 2026-02-27 00:53:58 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:53:58.496734 | orchestrator | 2026-02-27 00:53:58 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:54:01.542792 | orchestrator | 2026-02-27 00:54:01 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:54:01.542931 | orchestrator | 2026-02-27 00:54:01 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:54:01.543510 | orchestrator | 2026-02-27 00:54:01 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:54:01.544045 | orchestrator | 2026-02-27 00:54:01 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:54:01.544204 | orchestrator | 2026-02-27 00:54:01 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:54:04.574967 | orchestrator | 2026-02-27 00:54:04 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:54:04.576931 | orchestrator | 2026-02-27 00:54:04 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:54:04.578444 | orchestrator | 2026-02-27 00:54:04 | INFO  | Task 
71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:54:04.580224 | orchestrator | 2026-02-27 00:54:04 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:54:04.580948 | orchestrator | 2026-02-27 00:54:04 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:54:07.618460 | orchestrator | 2026-02-27 00:54:07 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:54:07.621360 | orchestrator | 2026-02-27 00:54:07 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:54:07.621415 | orchestrator | 2026-02-27 00:54:07 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:54:07.622983 | orchestrator | 2026-02-27 00:54:07 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:54:07.623077 | orchestrator | 2026-02-27 00:54:07 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:54:10.668954 | orchestrator | 2026-02-27 00:54:10 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:54:10.671999 | orchestrator | 2026-02-27 00:54:10 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:54:10.675728 | orchestrator | 2026-02-27 00:54:10 | INFO  | Task 71707bf1-a811-48f4-b038-8e614c7519ab is in state STARTED 2026-02-27 00:54:10.678208 | orchestrator | 2026-02-27 00:54:10 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED 2026-02-27 00:54:10.678304 | orchestrator | 2026-02-27 00:54:10 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:54:13.719154 | orchestrator | 2026-02-27 00:54:13 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:54:13.720916 | orchestrator | 2026-02-27 00:54:13 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:54:13.723424 | orchestrator | 2026-02-27 00:54:13 | INFO  | Task 
71707bf1-a811-48f4-b038-8e614c7519ab is in state SUCCESS
2026-02-27 00:54:13.725026 | orchestrator |
2026-02-27 00:54:13.725072 | orchestrator |
2026-02-27 00:54:13.725084 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-27 00:54:13.725097 | orchestrator |
2026-02-27 00:54:13.725150 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-27 00:54:13.725164 | orchestrator | Friday 27 February 2026 00:53:40 +0000 (0:00:00.182) 0:00:00.182 *******
2026-02-27 00:54:13.725176 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-27 00:54:13.725187 | orchestrator |
2026-02-27 00:54:13.725198 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-27 00:54:13.725208 | orchestrator | Friday 27 February 2026 00:53:41 +0000 (0:00:00.817) 0:00:01.000 *******
2026-02-27 00:54:13.725219 | orchestrator | changed: [testbed-manager]
2026-02-27 00:54:13.725229 | orchestrator |
2026-02-27 00:54:13.725239 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-27 00:54:13.725249 | orchestrator | Friday 27 February 2026 00:53:43 +0000 (0:00:01.338) 0:00:02.338 *******
2026-02-27 00:54:13.725259 | orchestrator | changed: [testbed-manager]
2026-02-27 00:54:13.725268 | orchestrator |
2026-02-27 00:54:13.725278 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:54:13.725287 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:54:13.725298 | orchestrator |
2026-02-27 00:54:13.725308 | orchestrator |
2026-02-27 00:54:13.725318 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:54:13.725329 | orchestrator | Friday 27 February 2026 00:53:43 +0000 (0:00:00.528) 0:00:02.867 *******
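The "Change server address in the kubeconfig file" task above exists because the kubeconfig generated on the k3s node points at a loopback endpoint (by default `https://127.0.0.1:6443`), which is useless once the file is copied to the manager. A minimal sketch of that rewrite, assuming the kubeconfig is handled as plain YAML text and using the node address 192.168.16.10 seen in the play output (the playbook itself does this with an Ansible task, not this helper):

```python
import re

def rewrite_server(kubeconfig_text: str, new_server: str) -> str:
    """Replace the value of every 'server:' line in a kubeconfig.

    Sketch of the 'Change server address in the kubeconfig file'
    step: the k3s-generated endpoint must be rewritten to an
    address reachable from the manager before the file is reused.
    """
    # (?m) makes ^/$ match per line; group 1 keeps indentation and key.
    return re.sub(r"(?m)^(\s*server:\s*)\S+$", r"\g<1>" + new_server, kubeconfig_text)

# Hypothetical, shortened kubeconfig fragment for illustration:
doc = "clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n"
doc = rewrite_server(doc, "https://192.168.16.10:6443")
```

The regex keeps the indentation intact, so the document stays valid YAML regardless of nesting depth.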
2026-02-27 00:54:13.725339 | orchestrator | ===============================================================================
2026-02-27 00:54:13.725349 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.34s
2026-02-27 00:54:13.725359 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s
2026-02-27 00:54:13.725369 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s
2026-02-27 00:54:13.725379 | orchestrator |
2026-02-27 00:54:13.725388 | orchestrator |
2026-02-27 00:54:13.725399 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-27 00:54:13.725410 | orchestrator |
2026-02-27 00:54:13.725421 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-27 00:54:13.725430 | orchestrator | Friday 27 February 2026 00:53:40 +0000 (0:00:00.187) 0:00:00.187 *******
2026-02-27 00:54:13.725468 | orchestrator | ok: [testbed-manager]
2026-02-27 00:54:13.725483 | orchestrator |
2026-02-27 00:54:13.725494 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-27 00:54:13.725505 | orchestrator | Friday 27 February 2026 00:53:41 +0000 (0:00:00.618) 0:00:00.806 *******
2026-02-27 00:54:13.725515 | orchestrator | ok: [testbed-manager]
2026-02-27 00:54:13.725524 | orchestrator |
2026-02-27 00:54:13.725534 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-27 00:54:13.725543 | orchestrator | Friday 27 February 2026 00:53:41 +0000 (0:00:00.660) 0:00:01.466 *******
2026-02-27 00:54:13.725553 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-27 00:54:13.725564 | orchestrator |
2026-02-27 00:54:13.725574 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-27 00:54:13.725585 | orchestrator | Friday 27 February 2026 00:53:42 +0000 (0:00:00.785) 0:00:02.251 *******
2026-02-27 00:54:13.725594 | orchestrator | changed: [testbed-manager]
2026-02-27 00:54:13.725605 | orchestrator |
2026-02-27 00:54:13.725616 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-27 00:54:13.725626 | orchestrator | Friday 27 February 2026 00:53:44 +0000 (0:00:01.693) 0:00:03.945 *******
2026-02-27 00:54:13.725636 | orchestrator | changed: [testbed-manager]
2026-02-27 00:54:13.725646 | orchestrator |
2026-02-27 00:54:13.725655 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-27 00:54:13.725665 | orchestrator | Friday 27 February 2026 00:53:44 +0000 (0:00:00.559) 0:00:04.504 *******
2026-02-27 00:54:13.725676 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-27 00:54:13.725687 | orchestrator |
2026-02-27 00:54:13.725697 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-27 00:54:13.725706 | orchestrator | Friday 27 February 2026 00:53:46 +0000 (0:00:01.733) 0:00:06.237 *******
2026-02-27 00:54:13.725716 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-27 00:54:13.725727 | orchestrator |
2026-02-27 00:54:13.725737 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-27 00:54:13.725747 | orchestrator | Friday 27 February 2026 00:53:47 +0000 (0:00:00.921) 0:00:07.158 *******
2026-02-27 00:54:13.725757 | orchestrator | ok: [testbed-manager]
2026-02-27 00:54:13.725767 | orchestrator |
2026-02-27 00:54:13.725778 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-27 00:54:13.725788 | orchestrator | Friday 27 February 2026 00:53:47 +0000 (0:00:00.539) 0:00:07.698 *******
2026-02-27 00:54:13.725798 | orchestrator | ok: [testbed-manager]
2026-02-27 00:54:13.725807 | orchestrator |
2026-02-27 00:54:13.725817 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:54:13.725827 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 00:54:13.725839 | orchestrator |
2026-02-27 00:54:13.725849 | orchestrator |
2026-02-27 00:54:13.725874 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:54:13.725885 | orchestrator | Friday 27 February 2026 00:53:48 +0000 (0:00:00.341) 0:00:08.040 *******
2026-02-27 00:54:13.725894 | orchestrator | ===============================================================================
2026-02-27 00:54:13.725905 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.73s
2026-02-27 00:54:13.725915 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.69s
2026-02-27 00:54:13.725925 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.92s
2026-02-27 00:54:13.725951 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s
2026-02-27 00:54:13.725962 | orchestrator | Create .kube directory -------------------------------------------------- 0.66s
2026-02-27 00:54:13.725971 | orchestrator | Get home directory of operator user ------------------------------------- 0.62s
2026-02-27 00:54:13.725981 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.56s
2026-02-27 00:54:13.726001 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.54s
2026-02-27 00:54:13.726011 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s
2026-02-27 00:54:13.726066 | orchestrator |
2026-02-27 00:54:13.726077 | orchestrator |
2026-02-27 00:54:13.726087 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] ***********************************************
2026-02-27 00:54:13.726097 | orchestrator |
2026-02-27 00:54:13.726123 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-02-27 00:54:13.726134 | orchestrator | Friday 27 February 2026 00:51:34 +0000 (0:00:00.182) 0:00:00.182 *******
2026-02-27 00:54:13.726143 | orchestrator | ok: [localhost] => {
2026-02-27 00:54:13.726154 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-02-27 00:54:13.726165 | orchestrator | }
2026-02-27 00:54:13.726175 | orchestrator |
2026-02-27 00:54:13.726185 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-02-27 00:54:13.726194 | orchestrator | Friday 27 February 2026 00:51:34 +0000 (0:00:00.107) 0:00:00.289 *******
2026-02-27 00:54:13.726205 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-02-27 00:54:13.726216 | orchestrator | ...ignoring
2026-02-27 00:54:13.726226 | orchestrator |
2026-02-27 00:54:13.726235 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-02-27 00:54:13.726245 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:04.063) 0:00:04.352 *******
2026-02-27 00:54:13.726255 | orchestrator | skipping: [localhost]
2026-02-27 00:54:13.726265 | orchestrator |
2026-02-27 00:54:13.726274 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-02-27 00:54:13.726283 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:00.087) 0:00:04.440 *******
2026-02-27 00:54:13.726293 | orchestrator | ok: [localhost]
2026-02-27 00:54:13.726303 | orchestrator |
2026-02-27 00:54:13.726312 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-02-27 00:54:13.726322 | orchestrator | 2026-02-27 00:54:13.726332 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 00:54:13.726342 | orchestrator | Friday 27 February 2026 00:51:39 +0000 (0:00:00.171) 0:00:04.612 ******* 2026-02-27 00:54:13.726352 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:13.726362 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:13.726371 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:13.726381 | orchestrator | 2026-02-27 00:54:13.726390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 00:54:13.726400 | orchestrator | Friday 27 February 2026 00:51:39 +0000 (0:00:00.753) 0:00:05.366 ******* 2026-02-27 00:54:13.726409 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-27 00:54:13.726419 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-27 00:54:13.726429 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-27 00:54:13.726438 | orchestrator | 2026-02-27 00:54:13.726446 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-27 00:54:13.726454 | orchestrator | 2026-02-27 00:54:13.726463 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-27 00:54:13.726472 | orchestrator | Friday 27 February 2026 00:51:42 +0000 (0:00:02.535) 0:00:07.902 ******* 2026-02-27 00:54:13.726481 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:54:13.726490 | orchestrator | 2026-02-27 00:54:13.726499 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-27 00:54:13.726508 | orchestrator | Friday 27 February 2026 00:51:43 +0000 (0:00:01.302) 0:00:09.204 ******* 2026-02-27 
00:54:13.726517 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:13.726525 | orchestrator | 2026-02-27 00:54:13.726533 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-27 00:54:13.726550 | orchestrator | Friday 27 February 2026 00:51:45 +0000 (0:00:01.754) 0:00:10.959 ******* 2026-02-27 00:54:13.726559 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:13.726621 | orchestrator | 2026-02-27 00:54:13.726633 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-27 00:54:13.726643 | orchestrator | Friday 27 February 2026 00:51:46 +0000 (0:00:00.601) 0:00:11.560 ******* 2026-02-27 00:54:13.726652 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:13.726661 | orchestrator | 2026-02-27 00:54:13.726672 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-27 00:54:13.726681 | orchestrator | Friday 27 February 2026 00:51:46 +0000 (0:00:00.466) 0:00:12.027 ******* 2026-02-27 00:54:13.726690 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:13.726699 | orchestrator | 2026-02-27 00:54:13.726708 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-27 00:54:13.726719 | orchestrator | Friday 27 February 2026 00:51:46 +0000 (0:00:00.415) 0:00:12.443 ******* 2026-02-27 00:54:13.726737 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:13.726748 | orchestrator | 2026-02-27 00:54:13.726759 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-27 00:54:13.726768 | orchestrator | Friday 27 February 2026 00:51:47 +0000 (0:00:01.008) 0:00:13.452 ******* 2026-02-27 00:54:13.726778 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:54:13.726787 | orchestrator | 2026-02-27 00:54:13.726796 
| orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-27 00:54:13.726851 | orchestrator | Friday 27 February 2026 00:51:48 +0000 (0:00:00.804) 0:00:14.256 ******* 2026-02-27 00:54:13.726862 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:13.726873 | orchestrator | 2026-02-27 00:54:13.726883 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-27 00:54:13.726892 | orchestrator | Friday 27 February 2026 00:51:49 +0000 (0:00:00.938) 0:00:15.195 ******* 2026-02-27 00:54:13.726903 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:13.726913 | orchestrator | 2026-02-27 00:54:13.726924 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-27 00:54:13.726934 | orchestrator | Friday 27 February 2026 00:51:50 +0000 (0:00:00.486) 0:00:15.681 ******* 2026-02-27 00:54:13.726945 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:13.726955 | orchestrator | 2026-02-27 00:54:13.726964 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-27 00:54:13.726975 | orchestrator | Friday 27 February 2026 00:51:50 +0000 (0:00:00.481) 0:00:16.163 ******* 2026-02-27 00:54:13.726990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-27 00:54:13.727005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-27 00:54:13.727032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-27 00:54:13.727044 | orchestrator | 2026-02-27 00:54:13.727054 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-27 00:54:13.727066 | orchestrator | Friday 27 February 2026 00:51:51 +0000 (0:00:01.344) 0:00:17.507 ******* 2026-02-27 00:54:13.727087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}}) 2026-02-27 00:54:13.727100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-27 00:54:13.727140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-27 00:54:13.727152 | orchestrator | 2026-02-27 00:54:13.727161 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-27 00:54:13.727170 | orchestrator | Friday 27 February 2026 00:51:55 +0000 (0:00:03.280) 0:00:20.787 ******* 2026-02-27 00:54:13.727179 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-27 00:54:13.727189 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-27 00:54:13.727197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-27 00:54:13.727206 | orchestrator | 2026-02-27 00:54:13.727220 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-27 00:54:13.727230 | orchestrator | Friday 27 February 2026 00:52:00 +0000 (0:00:04.971) 0:00:25.759 ******* 2026-02-27 00:54:13.727238 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-27 00:54:13.727249 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-27 00:54:13.727259 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-27 00:54:13.727270 | orchestrator | 2026-02-27 00:54:13.727286 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-27 00:54:13.727296 | orchestrator | Friday 27 February 2026 00:52:02 +0000 (0:00:02.498) 0:00:28.257 ******* 2026-02-27 00:54:13.727305 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-27 00:54:13.727314 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-27 00:54:13.727323 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-27 00:54:13.727334 | orchestrator |
2026-02-27 00:54:13.727343 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-27 00:54:13.727353 | orchestrator | Friday 27 February 2026 00:52:04 +0000 (0:00:01.661) 0:00:29.919 *******
2026-02-27 00:54:13.727362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-27 00:54:13.727372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-27 00:54:13.727381 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-27 00:54:13.727390 | orchestrator |
2026-02-27 00:54:13.727400 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-27 00:54:13.727425 | orchestrator | Friday 27 February 2026 00:52:06 +0000 (0:00:01.941) 0:00:31.861 *******
2026-02-27 00:54:13.727434 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-27 00:54:13.727443 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-27 00:54:13.727452 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-27 00:54:13.727461 | orchestrator |
2026-02-27 00:54:13.727470 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-27 00:54:13.727479 | orchestrator | Friday 27 February 2026 00:52:07 +0000 (0:00:01.555) 0:00:33.416 *******
2026-02-27 00:54:13.727489 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-27 00:54:13.727499 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-27 00:54:13.727509 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-27 00:54:13.727518 | orchestrator |
2026-02-27 00:54:13.727529 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-27 00:54:13.727540 | orchestrator | Friday 27 February 2026 00:52:09 +0000 (0:00:01.864) 0:00:35.281 *******
2026-02-27 00:54:13.727550 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:54:13.727560 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:54:13.727570 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:54:13.727581 | orchestrator |
2026-02-27 00:54:13.727591 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-27 00:54:13.727600 | orchestrator | Friday 27 February 2026 00:52:10 +0000 (0:00:00.581) 0:00:35.862 *******
2026-02-27 00:54:13.727611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-27 00:54:13.727639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-27 00:54:13.727651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-27 00:54:13.727669 | orchestrator |
2026-02-27 00:54:13.727680 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-27 00:54:13.727689 | orchestrator | Friday 27 February 2026 00:52:12 +0000 (0:00:01.747) 0:00:37.610 *******
2026-02-27 00:54:13.727699 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:54:13.727710 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:54:13.727721 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:54:13.727731 | orchestrator |
2026-02-27 00:54:13.727740 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-02-27 00:54:13.727751 | orchestrator | Friday 27 February 2026 00:52:13 +0000 (0:00:00.992) 0:00:38.602 *******
2026-02-27 00:54:13.727762 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:54:13.727772 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:54:13.727782 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:54:13.727792 | orchestrator |
2026-02-27 00:54:13.727801 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-02-27 00:54:13.727811 | orchestrator | Friday 27 February 2026 00:52:21 +0000 (0:00:08.745) 0:00:47.347 *******
2026-02-27 00:54:13.727821 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:54:13.727831 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:54:13.727841 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:54:13.727852 | orchestrator |
2026-02-27 00:54:13.727862 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-27 00:54:13.727872 | orchestrator |
2026-02-27 00:54:13.727882 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-27 00:54:13.727892 | orchestrator | Friday 27 February 2026 00:52:22 +0000 (0:00:00.948) 0:00:48.296 *******
2026-02-27 00:54:13.727903 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:54:13.727913 | orchestrator |
2026-02-27 00:54:13.727924 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-27 00:54:13.727933 | orchestrator | Friday 27 February 2026 00:52:24 +0000 (0:00:01.499) 0:00:49.796 *******
2026-02-27 00:54:13.727943 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:54:13.727951 | orchestrator |
2026-02-27 00:54:13.727960 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-27 00:54:13.727969 | orchestrator | Friday 27 February 2026 00:52:24 +0000 (0:00:00.530) 0:00:50.326 *******
2026-02-27 00:54:13.727978 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:54:13.727988 | orchestrator |
2026-02-27 00:54:13.727998 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-27 00:54:13.728007 | orchestrator | Friday 27 February 2026 00:52:26 +0000 (0:00:02.131) 0:00:52.458 *******
2026-02-27 00:54:13.728018 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:54:13.728029 | orchestrator |
2026-02-27 00:54:13.728039 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-27 00:54:13.728049 | orchestrator |
2026-02-27 00:54:13.728060 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-27 00:54:13.728069 | orchestrator | Friday 27 February 2026 00:53:24 +0000 (0:00:57.563) 0:01:50.022 *******
2026-02-27 00:54:13.728086 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:54:13.728096 | orchestrator |
2026-02-27 00:54:13.728135 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-27 00:54:13.728145 | orchestrator | Friday 27 February 2026 00:53:25 +0000 (0:00:00.751) 0:01:50.774 *******
2026-02-27 00:54:13.728155 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:54:13.728165 | orchestrator |
2026-02-27 00:54:13.728174 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-27 00:54:13.728184 | orchestrator | Friday 27 February 2026 00:53:25 +0000 (0:00:00.237) 0:01:51.011 *******
2026-02-27 00:54:13.728195 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:54:13.728205 | orchestrator |
2026-02-27 00:54:13.728214 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-27 00:54:13.728224 | orchestrator | Friday 27 February 2026 00:53:27 +0000 (0:00:02.165) 0:01:53.176 *******
2026-02-27 00:54:13.728234 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:54:13.728245 | orchestrator |
2026-02-27 00:54:13.728255 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-27 00:54:13.728266 | orchestrator |
2026-02-27 00:54:13.728277 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-27 00:54:13.728295 | orchestrator | Friday 27 February 2026 00:53:48 +0000 (0:00:20.403) 0:02:13.579 *******
2026-02-27 00:54:13.728305 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:54:13.728316 | orchestrator |
2026-02-27 00:54:13.728326 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-27 00:54:13.728336 | orchestrator | Friday 27 February 2026 00:53:48 +0000 (0:00:00.643) 0:02:14.223 *******
2026-02-27 00:54:13.728346 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:54:13.728356 | orchestrator |
2026-02-27 00:54:13.728366 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-27 00:54:13.728376 | orchestrator | Friday 27 February 2026 00:53:48 +0000 (0:00:00.245) 0:02:14.469 *******
2026-02-27 00:54:13.728460 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:54:13.728486 | orchestrator |
2026-02-27 00:54:13.728495 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-27 00:54:13.728504 | orchestrator | Friday 27 February 2026 00:53:50 +0000 (0:00:01.859) 0:02:16.329 *******
2026-02-27 00:54:13.728513 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:54:13.728521 | orchestrator |
2026-02-27 00:54:13.728529 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-27 00:54:13.728537 | orchestrator |
2026-02-27 00:54:13.728545 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-27 00:54:13.728554 | orchestrator | Friday 27 February 2026 00:54:08 +0000 (0:00:17.254) 0:02:33.583 *******
2026-02-27 00:54:13.728563 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:54:13.728571 | orchestrator |
2026-02-27 00:54:13.728580 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-27 00:54:13.728588 | orchestrator | Friday 27 February 2026 00:54:08 +0000 (0:00:00.580) 0:02:34.164 *******
2026-02-27 00:54:13.728598 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-27 00:54:13.728607 | orchestrator | enable_outward_rabbitmq_True
2026-02-27 00:54:13.728616 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-27 00:54:13.728625 | orchestrator | outward_rabbitmq_restart
2026-02-27 00:54:13.728635 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:54:13.728644 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:54:13.728653 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:54:13.728662 | orchestrator |
2026-02-27 00:54:13.728670 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-02-27 00:54:13.728679 | orchestrator | skipping: no hosts matched
2026-02-27 00:54:13.728689 | orchestrator |
2026-02-27 00:54:13.728699 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-02-27 00:54:13.728717 | orchestrator | skipping: no hosts matched
2026-02-27 00:54:13.728727 | orchestrator |
2026-02-27 00:54:13.728736 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-02-27 00:54:13.728745 | orchestrator | skipping: no hosts matched
2026-02-27 00:54:13.728753 | orchestrator |
2026-02-27 00:54:13.728763 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:54:13.728773 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-02-27 00:54:13.728783 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-27 00:54:13.728793 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-27 00:54:13.728803 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-27 00:54:13.728812 | orchestrator |
2026-02-27 00:54:13.728821 | orchestrator |
2026-02-27 00:54:13.728831 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:54:13.728841 | orchestrator | Friday 27 February 2026 00:54:11 +0000 (0:00:03.169) 0:02:37.334 *******
2026-02-27 00:54:13.728851 | orchestrator | ===============================================================================
2026-02-27 00:54:13.728861 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 95.22s
2026-02-27 00:54:13.728870 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.75s
2026-02-27 00:54:13.728880 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 6.16s
2026-02-27 00:54:13.728890 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 4.97s
2026-02-27 00:54:13.728899 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.06s
2026-02-27 00:54:13.728909 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.28s
2026-02-27 00:54:13.728918 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.17s
2026-02-27 00:54:13.728927 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.90s
2026-02-27 00:54:13.728937 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.54s
2026-02-27 00:54:13.728946 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.50s
2026-02-27 00:54:13.728962 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.94s
2026-02-27 00:54:13.728972 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.86s
2026-02-27 00:54:13.728981 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.75s
2026-02-27 00:54:13.728987 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.75s
2026-02-27 00:54:13.728993 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.66s
2026-02-27 00:54:13.729008 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.56s
2026-02-27 00:54:13.729014 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.34s
2026-02-27 00:54:13.729020 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.30s
2026-02-27 00:54:13.729025 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.01s
2026-02-27 00:54:13.729031 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.01s
2026-02-27 00:54:13.729171 | orchestrator | 2026-02-27 00:54:13 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:13.729186 | orchestrator | 2026-02-27 00:54:13 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:16.780564 | orchestrator | 2026-02-27 00:54:16 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:16.784577 | orchestrator | 2026-02-27 00:54:16 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:16.787183 | orchestrator | 2026-02-27 00:54:16 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:16.787248 | orchestrator | 2026-02-27 00:54:16 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:19.828210 | orchestrator | 2026-02-27 00:54:19 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:19.831985 | orchestrator | 2026-02-27 00:54:19 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:19.832784 | orchestrator | 2026-02-27 00:54:19 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:19.834695 | orchestrator | 2026-02-27 00:54:19 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:22.871252 | orchestrator | 2026-02-27 00:54:22 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:22.871972 | orchestrator | 2026-02-27 00:54:22 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:22.872265 | orchestrator | 2026-02-27 00:54:22 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:22.872301 | orchestrator | 2026-02-27 00:54:22 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:25.915416 | orchestrator | 2026-02-27 00:54:25 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:25.917077 | orchestrator | 2026-02-27 00:54:25 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:25.918494 | orchestrator | 2026-02-27 00:54:25 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:25.918570 | orchestrator | 2026-02-27 00:54:25 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:28.960463 | orchestrator | 2026-02-27 00:54:28 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:28.964265 | orchestrator | 2026-02-27 00:54:28 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:28.964620 | orchestrator | 2026-02-27 00:54:28 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:28.964654 | orchestrator | 2026-02-27 00:54:28 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:32.011570 | orchestrator | 2026-02-27 00:54:32 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:32.013606 | orchestrator | 2026-02-27 00:54:32 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:32.013674 | orchestrator | 2026-02-27 00:54:32 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:32.013688 | orchestrator | 2026-02-27 00:54:32 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:35.058677 | orchestrator | 2026-02-27 00:54:35 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:35.058778 | orchestrator | 2026-02-27 00:54:35 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:35.058814 | orchestrator | 2026-02-27 00:54:35 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:35.058827 | orchestrator | 2026-02-27 00:54:35 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:38.090287 | orchestrator | 2026-02-27 00:54:38 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:38.098774 | orchestrator | 2026-02-27 00:54:38 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:38.101402 | orchestrator | 2026-02-27 00:54:38 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:38.101439 | orchestrator | 2026-02-27 00:54:38 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:41.146277 | orchestrator | 2026-02-27 00:54:41 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:41.149269 | orchestrator | 2026-02-27 00:54:41 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:41.151841 | orchestrator | 2026-02-27 00:54:41 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:41.152028 | orchestrator | 2026-02-27 00:54:41 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:44.215197 | orchestrator | 2026-02-27 00:54:44 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:44.218202 | orchestrator | 2026-02-27 00:54:44 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:44.219835 | orchestrator | 2026-02-27 00:54:44 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:44.219896 | orchestrator | 2026-02-27 00:54:44 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:47.270255 | orchestrator | 2026-02-27 00:54:47 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:47.271058 | orchestrator | 2026-02-27 00:54:47 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:47.271922 | orchestrator | 2026-02-27 00:54:47 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:47.271977 | orchestrator | 2026-02-27 00:54:47 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:50.300754 | orchestrator | 2026-02-27 00:54:50 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:50.300838 | orchestrator | 2026-02-27 00:54:50 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:50.302923 | orchestrator | 2026-02-27 00:54:50 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:50.303359 | orchestrator | 2026-02-27 00:54:50 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:53.343315 | orchestrator | 2026-02-27 00:54:53 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:53.343455 | orchestrator | 2026-02-27 00:54:53 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:53.346555 | orchestrator | 2026-02-27 00:54:53 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:53.346613 | orchestrator | 2026-02-27 00:54:53 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:56.392848 | orchestrator | 2026-02-27 00:54:56 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:56.394446 | orchestrator | 2026-02-27 00:54:56 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:56.396003 | orchestrator | 2026-02-27 00:54:56 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state STARTED
2026-02-27 00:54:56.396037 | orchestrator | 2026-02-27 00:54:56 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:59.439195 | orchestrator | 2026-02-27 00:54:59 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED
2026-02-27 00:54:59.440060 | orchestrator | 2026-02-27 00:54:59 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:54:59.441636 | orchestrator | 2026-02-27 00:54:59 | INFO  | Task 6b585c0e-0830-41d2-a6a4-15b16541ac8b is in state SUCCESS
2026-02-27 00:54:59.445198 | orchestrator | 2026-02-27 00:54:59 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:54:59.446578 | orchestrator |
2026-02-27 00:54:59.446605 | orchestrator |
2026-02-27 00:54:59.446611 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 00:54:59.446617 | orchestrator |
2026-02-27 00:54:59.446623 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 00:54:59.446629 | orchestrator | Friday 27 February 2026 00:52:36 +0000 (0:00:00.166) 0:00:00.166 *******
2026-02-27 00:54:59.446635 | orchestrator | ok: [testbed-node-3]
2026-02-27 00:54:59.446642 | orchestrator | ok: [testbed-node-4]
2026-02-27 00:54:59.446648 | orchestrator | ok: [testbed-node-5]
2026-02-27 00:54:59.446659 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:54:59.446664 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:54:59.446670 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:54:59.446727 | orchestrator |
2026-02-27 00:54:59.446734 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 00:54:59.446739 | orchestrator | Friday 27 February 2026 00:52:37 +0000 (0:00:00.766) 0:00:00.932 *******
2026-02-27 00:54:59.446745 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-27 00:54:59.446750 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-27 00:54:59.446756 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-27 00:54:59.446761 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-27 00:54:59.446766 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-27 00:54:59.446771 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-27 00:54:59.446776 | orchestrator |
2026-02-27 00:54:59.446781 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-27 00:54:59.446786 | orchestrator |
2026-02-27 00:54:59.446791 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-27 00:54:59.446797 | orchestrator | Friday 27 February 2026 00:52:38 +0000 (0:00:01.472) 0:00:02.405 *******
2026-02-27 00:54:59.446803 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:54:59.446809 | orchestrator |
2026-02-27 00:54:59.446814 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-27 00:54:59.446820 | orchestrator | Friday 27 February 2026 00:52:40 +0000 (0:00:01.801) 0:00:04.206 *******
2026-02-27 00:54:59.446827 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446834 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446883 | orchestrator |
2026-02-27 00:54:59.446889 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-27 00:54:59.446894 | orchestrator | Friday 27 February 2026 00:52:42 +0000 (0:00:01.529) 0:00:05.736 *******
2026-02-27 00:54:59.446903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446939 | orchestrator |
2026-02-27 00:54:59.446944 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-27 00:54:59.446949 | orchestrator | Friday 27 February 2026 00:52:44 +0000 (0:00:01.940) 0:00:07.676 *******
2026-02-27 00:54:59.446955 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.446996 | orchestrator |
2026-02-27 00:54:59.447001 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-27 00:54:59.447006 | orchestrator | Friday 27 February 2026 00:52:45 +0000 (0:00:01.627) 0:00:09.303 *******
2026-02-27 00:54:59.447011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.447020 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.447025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.447030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.447036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.447048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:54:59.447053 | orchestrator |
2026-02-27 00:54:59.447059 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-02-27 00:54:59.447064 | orchestrator | Friday 27 February 2026 00:52:47 +0000 (0:00:01.716) 0:00:11.019 *******
2026-02-27 00:54:59.447069 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.447074 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.447079 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.447104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.447109 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.447114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.447120 | orchestrator | 2026-02-27 00:54:59.447125 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-27 00:54:59.447130 | orchestrator | Friday 27 February 2026 00:52:48 +0000 (0:00:01.522) 0:00:12.541 ******* 2026-02-27 00:54:59.447135 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:54:59.447141 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:54:59.447146 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.447152 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:54:59.447158 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:54:59.447164 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.447170 | orchestrator | 2026-02-27 00:54:59.447176 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-27 00:54:59.447182 | orchestrator | Friday 27 February 2026 00:52:51 +0000 (0:00:02.598) 0:00:15.140 ******* 2026-02-27 00:54:59.447188 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
2026-02-27 00:54:59.447194 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-27 00:54:59.447199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-27 00:54:59.447208 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-27 00:54:59.447214 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-27 00:54:59.447220 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-27 00:54:59.447226 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-27 00:54:59.447235 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-27 00:54:59.447240 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-27 00:54:59.447246 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-27 00:54:59.447252 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-27 00:54:59.447258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-27 00:54:59.447264 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-27 00:54:59.447275 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-27 00:54:59.447281 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-27 00:54:59.447287 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-27 00:54:59.447293 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-27 00:54:59.447299 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-27 00:54:59.447305 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-27 00:54:59.447312 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-27 00:54:59.447318 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-27 00:54:59.447324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-27 00:54:59.447330 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-27 00:54:59.447336 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-27 00:54:59.447342 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-27 00:54:59.447347 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-27 00:54:59.447353 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-27 00:54:59.447359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-27 00:54:59.447365 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-27 00:54:59.447371 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-27 00:54:59.447377 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-27 00:54:59.447383 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-27 00:54:59.447389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-27 00:54:59.447395 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-27 00:54:59.447401 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-27 00:54:59.447407 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-27 00:54:59.447413 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-27 00:54:59.447419 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-27 00:54:59.447425 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-27 00:54:59.447431 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-27 00:54:59.447439 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-27 00:54:59.447444 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-27 00:54:59.447453 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 
'present'}) 2026-02-27 00:54:59.447461 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-27 00:54:59.447467 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-27 00:54:59.447472 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-27 00:54:59.447477 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-27 00:54:59.447482 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-27 00:54:59.447487 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-27 00:54:59.447492 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-27 00:54:59.447498 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-27 00:54:59.447503 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-27 00:54:59.447508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-27 00:54:59.447513 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-27 00:54:59.447518 | orchestrator | 2026-02-27 00:54:59.447523 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-27 00:54:59.447528 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:21.963) 0:00:37.103 ******* 2026-02-27 00:54:59.447533 | orchestrator | 2026-02-27 00:54:59.447538 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-27 00:54:59.447543 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:00.066) 0:00:37.169 ******* 2026-02-27 00:54:59.447549 | orchestrator | 2026-02-27 00:54:59.447554 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-27 00:54:59.447559 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:00.062) 0:00:37.232 ******* 2026-02-27 00:54:59.447564 | orchestrator | 2026-02-27 00:54:59.447569 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-27 00:54:59.447574 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:00.067) 0:00:37.300 ******* 2026-02-27 00:54:59.447579 | orchestrator | 2026-02-27 00:54:59.447584 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-27 00:54:59.447589 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:00.065) 0:00:37.365 ******* 2026-02-27 00:54:59.447594 | orchestrator | 2026-02-27 00:54:59.447599 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-27 00:54:59.447604 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:00.064) 0:00:37.430 ******* 2026-02-27 00:54:59.447609 | orchestrator | 2026-02-27 00:54:59.447614 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-27 00:54:59.447619 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:00.067) 0:00:37.498 ******* 2026-02-27 00:54:59.447624 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.447629 | orchestrator | ok: 
[testbed-node-5] 2026-02-27 00:54:59.447634 | orchestrator | ok: [testbed-node-3] 2026-02-27 00:54:59.447639 | orchestrator | ok: [testbed-node-4] 2026-02-27 00:54:59.447644 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.447649 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.447654 | orchestrator | 2026-02-27 00:54:59.447663 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-27 00:54:59.447668 | orchestrator | Friday 27 February 2026 00:53:16 +0000 (0:00:02.289) 0:00:39.787 ******* 2026-02-27 00:54:59.447673 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.447678 | orchestrator | changed: [testbed-node-3] 2026-02-27 00:54:59.447683 | orchestrator | changed: [testbed-node-5] 2026-02-27 00:54:59.447688 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.447693 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:54:59.447698 | orchestrator | changed: [testbed-node-4] 2026-02-27 00:54:59.447703 | orchestrator | 2026-02-27 00:54:59.447708 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-27 00:54:59.447713 | orchestrator | 2026-02-27 00:54:59.447718 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-27 00:54:59.447723 | orchestrator | Friday 27 February 2026 00:53:45 +0000 (0:00:29.632) 0:01:09.419 ******* 2026-02-27 00:54:59.447728 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:54:59.447734 | orchestrator | 2026-02-27 00:54:59.447739 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-27 00:54:59.447744 | orchestrator | Friday 27 February 2026 00:53:46 +0000 (0:00:00.870) 0:01:10.290 ******* 2026-02-27 00:54:59.447749 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-27 00:54:59.447754 | orchestrator | 2026-02-27 00:54:59.447762 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-27 00:54:59.447767 | orchestrator | Friday 27 February 2026 00:53:47 +0000 (0:00:00.791) 0:01:11.082 ******* 2026-02-27 00:54:59.447772 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.447777 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.447782 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.447787 | orchestrator | 2026-02-27 00:54:59.447793 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-27 00:54:59.447800 | orchestrator | Friday 27 February 2026 00:53:48 +0000 (0:00:01.137) 0:01:12.220 ******* 2026-02-27 00:54:59.447806 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.447811 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.447816 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.447821 | orchestrator | 2026-02-27 00:54:59.447826 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-27 00:54:59.447831 | orchestrator | Friday 27 February 2026 00:53:48 +0000 (0:00:00.360) 0:01:12.580 ******* 2026-02-27 00:54:59.447836 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.447841 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.447846 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.447851 | orchestrator | 2026-02-27 00:54:59.447856 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-27 00:54:59.447861 | orchestrator | Friday 27 February 2026 00:53:49 +0000 (0:00:00.353) 0:01:12.934 ******* 2026-02-27 00:54:59.447866 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.447871 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.447876 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.447881 | orchestrator | 
2026-02-27 00:54:59.447886 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-27 00:54:59.447891 | orchestrator | Friday 27 February 2026 00:53:49 +0000 (0:00:00.352) 0:01:13.286 ******* 2026-02-27 00:54:59.447896 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.447901 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.447906 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.447911 | orchestrator | 2026-02-27 00:54:59.447916 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-27 00:54:59.447921 | orchestrator | Friday 27 February 2026 00:53:50 +0000 (0:00:00.663) 0:01:13.950 ******* 2026-02-27 00:54:59.447926 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.447932 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.447940 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.447945 | orchestrator | 2026-02-27 00:54:59.447950 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-27 00:54:59.447955 | orchestrator | Friday 27 February 2026 00:53:50 +0000 (0:00:00.398) 0:01:14.348 ******* 2026-02-27 00:54:59.447960 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.447965 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.447970 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.447975 | orchestrator | 2026-02-27 00:54:59.447980 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-27 00:54:59.447985 | orchestrator | Friday 27 February 2026 00:53:51 +0000 (0:00:00.373) 0:01:14.722 ******* 2026-02-27 00:54:59.447990 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.447995 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448000 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448005 | orchestrator | 2026-02-27 
00:54:59.448010 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-27 00:54:59.448015 | orchestrator | Friday 27 February 2026 00:53:51 +0000 (0:00:00.374) 0:01:15.096 ******* 2026-02-27 00:54:59.448020 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448026 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448031 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448036 | orchestrator | 2026-02-27 00:54:59.448041 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-27 00:54:59.448046 | orchestrator | Friday 27 February 2026 00:53:51 +0000 (0:00:00.529) 0:01:15.626 ******* 2026-02-27 00:54:59.448051 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448056 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448061 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448066 | orchestrator | 2026-02-27 00:54:59.448071 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-27 00:54:59.448076 | orchestrator | Friday 27 February 2026 00:53:52 +0000 (0:00:00.399) 0:01:16.026 ******* 2026-02-27 00:54:59.448102 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448108 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448113 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448118 | orchestrator | 2026-02-27 00:54:59.448123 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-27 00:54:59.448128 | orchestrator | Friday 27 February 2026 00:53:52 +0000 (0:00:00.339) 0:01:16.365 ******* 2026-02-27 00:54:59.448133 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448138 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448143 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448148 | orchestrator | 2026-02-27 
00:54:59.448153 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-27 00:54:59.448158 | orchestrator | Friday 27 February 2026 00:53:53 +0000 (0:00:00.334) 0:01:16.699 ******* 2026-02-27 00:54:59.448163 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448168 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448173 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448178 | orchestrator | 2026-02-27 00:54:59.448183 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-27 00:54:59.448188 | orchestrator | Friday 27 February 2026 00:53:53 +0000 (0:00:00.584) 0:01:17.284 ******* 2026-02-27 00:54:59.448194 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448199 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448204 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448209 | orchestrator | 2026-02-27 00:54:59.448214 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-27 00:54:59.448219 | orchestrator | Friday 27 February 2026 00:53:53 +0000 (0:00:00.329) 0:01:17.614 ******* 2026-02-27 00:54:59.448224 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448229 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448234 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448242 | orchestrator | 2026-02-27 00:54:59.448251 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-27 00:54:59.448256 | orchestrator | Friday 27 February 2026 00:53:54 +0000 (0:00:00.434) 0:01:18.048 ******* 2026-02-27 00:54:59.448261 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448266 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448271 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448276 | orchestrator | 2026-02-27 
00:54:59.448281 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-27 00:54:59.448289 | orchestrator | Friday 27 February 2026 00:53:54 +0000 (0:00:00.366) 0:01:18.415 ******* 2026-02-27 00:54:59.448295 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448300 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448305 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448310 | orchestrator | 2026-02-27 00:54:59.448315 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-27 00:54:59.448320 | orchestrator | Friday 27 February 2026 00:53:55 +0000 (0:00:00.309) 0:01:18.725 ******* 2026-02-27 00:54:59.448325 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:54:59.448330 | orchestrator | 2026-02-27 00:54:59.448335 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-02-27 00:54:59.448340 | orchestrator | Friday 27 February 2026 00:53:55 +0000 (0:00:00.875) 0:01:19.600 ******* 2026-02-27 00:54:59.448345 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.448350 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.448355 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.448360 | orchestrator | 2026-02-27 00:54:59.448366 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-02-27 00:54:59.448371 | orchestrator | Friday 27 February 2026 00:53:56 +0000 (0:00:00.444) 0:01:20.044 ******* 2026-02-27 00:54:59.448376 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.448381 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.448386 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.448391 | orchestrator | 2026-02-27 00:54:59.448396 | orchestrator | TASK [ovn-db : Check NB cluster status] 
**************************************** 2026-02-27 00:54:59.448401 | orchestrator | Friday 27 February 2026 00:53:56 +0000 (0:00:00.482) 0:01:20.526 ******* 2026-02-27 00:54:59.448406 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448411 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448416 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448421 | orchestrator | 2026-02-27 00:54:59.448426 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-27 00:54:59.448431 | orchestrator | Friday 27 February 2026 00:53:57 +0000 (0:00:00.646) 0:01:21.173 ******* 2026-02-27 00:54:59.448436 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448442 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448447 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448452 | orchestrator | 2026-02-27 00:54:59.448457 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-27 00:54:59.448462 | orchestrator | Friday 27 February 2026 00:53:57 +0000 (0:00:00.364) 0:01:21.537 ******* 2026-02-27 00:54:59.448467 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448472 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448477 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448482 | orchestrator | 2026-02-27 00:54:59.448487 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-27 00:54:59.448492 | orchestrator | Friday 27 February 2026 00:53:58 +0000 (0:00:00.356) 0:01:21.894 ******* 2026-02-27 00:54:59.448497 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448502 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448507 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448512 | orchestrator | 2026-02-27 00:54:59.448518 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-02-27 00:54:59.448526 | orchestrator | Friday 27 February 2026 00:53:58 +0000 (0:00:00.317) 0:01:22.212 ******* 2026-02-27 00:54:59.448531 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448536 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448541 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448546 | orchestrator | 2026-02-27 00:54:59.448551 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-27 00:54:59.448556 | orchestrator | Friday 27 February 2026 00:53:59 +0000 (0:00:00.574) 0:01:22.786 ******* 2026-02-27 00:54:59.448562 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.448567 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.448572 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.448577 | orchestrator | 2026-02-27 00:54:59.448582 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-27 00:54:59.448587 | orchestrator | Friday 27 February 2026 00:53:59 +0000 (0:00:00.350) 0:01:23.137 ******* 2026-02-27 00:54:59.448592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-27 00:54:59.448723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448774 | orchestrator | 2026-02-27 00:54:59.448779 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-27 00:54:59.448784 | orchestrator | Friday 27 February 2026 00:54:01 +0000 (0:00:01.597) 0:01:24.735 ******* 2026-02-27 00:54:59.448790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448846 | orchestrator | 2026-02-27 00:54:59.448851 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-27 00:54:59.448856 | orchestrator | Friday 27 February 2026 00:54:05 +0000 (0:00:04.395) 0:01:29.130 ******* 2026-02-27 00:54:59.448862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448895 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.448919 | orchestrator | 2026-02-27 00:54:59.448924 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-27 00:54:59.448929 | 
orchestrator | Friday 27 February 2026 00:54:07 +0000 (0:00:02.466) 0:01:31.597 ******* 2026-02-27 00:54:59.448934 | orchestrator | 2026-02-27 00:54:59.448939 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-27 00:54:59.448945 | orchestrator | Friday 27 February 2026 00:54:08 +0000 (0:00:00.066) 0:01:31.663 ******* 2026-02-27 00:54:59.448950 | orchestrator | 2026-02-27 00:54:59.448955 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-27 00:54:59.448960 | orchestrator | Friday 27 February 2026 00:54:08 +0000 (0:00:00.065) 0:01:31.729 ******* 2026-02-27 00:54:59.448965 | orchestrator | 2026-02-27 00:54:59.448970 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-27 00:54:59.448975 | orchestrator | Friday 27 February 2026 00:54:08 +0000 (0:00:00.069) 0:01:31.798 ******* 2026-02-27 00:54:59.448980 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.448985 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.448990 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:54:59.448995 | orchestrator | 2026-02-27 00:54:59.449002 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-27 00:54:59.449010 | orchestrator | Friday 27 February 2026 00:54:10 +0000 (0:00:02.800) 0:01:34.599 ******* 2026-02-27 00:54:59.449017 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.449025 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:54:59.449033 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.449040 | orchestrator | 2026-02-27 00:54:59.449048 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-27 00:54:59.449055 | orchestrator | Friday 27 February 2026 00:54:14 +0000 (0:00:03.154) 0:01:37.753 ******* 2026-02-27 00:54:59.449063 | orchestrator | changed: 
[testbed-node-0] 2026-02-27 00:54:59.449071 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.449079 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:54:59.449203 | orchestrator | 2026-02-27 00:54:59.449209 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-27 00:54:59.449214 | orchestrator | Friday 27 February 2026 00:54:17 +0000 (0:00:03.252) 0:01:41.006 ******* 2026-02-27 00:54:59.449219 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.449225 | orchestrator | 2026-02-27 00:54:59.449230 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-27 00:54:59.449235 | orchestrator | Friday 27 February 2026 00:54:17 +0000 (0:00:00.131) 0:01:41.137 ******* 2026-02-27 00:54:59.449241 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449246 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.449251 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449256 | orchestrator | 2026-02-27 00:54:59.449268 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-27 00:54:59.449280 | orchestrator | Friday 27 February 2026 00:54:18 +0000 (0:00:00.777) 0:01:41.915 ******* 2026-02-27 00:54:59.449285 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.449290 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.449295 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.449300 | orchestrator | 2026-02-27 00:54:59.449305 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-27 00:54:59.449314 | orchestrator | Friday 27 February 2026 00:54:19 +0000 (0:00:00.791) 0:01:42.706 ******* 2026-02-27 00:54:59.449319 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449325 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449330 | orchestrator | ok: [testbed-node-2] 2026-02-27 
00:54:59.449335 | orchestrator | 2026-02-27 00:54:59.449339 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-27 00:54:59.449344 | orchestrator | Friday 27 February 2026 00:54:19 +0000 (0:00:00.848) 0:01:43.555 ******* 2026-02-27 00:54:59.449349 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.449354 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.449359 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.449365 | orchestrator | 2026-02-27 00:54:59.449370 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-27 00:54:59.449376 | orchestrator | Friday 27 February 2026 00:54:20 +0000 (0:00:00.919) 0:01:44.474 ******* 2026-02-27 00:54:59.449381 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449387 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449392 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.449398 | orchestrator | 2026-02-27 00:54:59.449404 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-27 00:54:59.449409 | orchestrator | Friday 27 February 2026 00:54:21 +0000 (0:00:00.893) 0:01:45.368 ******* 2026-02-27 00:54:59.449414 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449420 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449425 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.449431 | orchestrator | 2026-02-27 00:54:59.449437 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-27 00:54:59.449442 | orchestrator | Friday 27 February 2026 00:54:22 +0000 (0:00:00.800) 0:01:46.168 ******* 2026-02-27 00:54:59.449447 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449451 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449456 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.449461 | orchestrator | 2026-02-27 
00:54:59.449465 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-27 00:54:59.449470 | orchestrator | Friday 27 February 2026 00:54:22 +0000 (0:00:00.311) 0:01:46.479 ******* 2026-02-27 00:54:59.449475 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449481 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449486 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449491 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449501 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449506 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449513 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449519 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449524 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449529 | orchestrator | 2026-02-27 00:54:59.449534 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-27 00:54:59.449538 | orchestrator | Friday 27 February 2026 00:54:24 +0000 (0:00:01.757) 0:01:48.236 ******* 2026-02-27 00:54:59.449565 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449571 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449575 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449581 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449603 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-27 00:54:59.449615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449620 | orchestrator | 2026-02-27 00:54:59.449625 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-27 00:54:59.449630 | orchestrator | Friday 27 February 2026 00:54:29 +0000 (0:00:04.480) 0:01:52.717 ******* 2026-02-27 00:54:59.449635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449640 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449645 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 
00:54:59.449653 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449668 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 00:54:59.449689 | orchestrator | 2026-02-27 00:54:59.449694 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-27 00:54:59.449699 | orchestrator | Friday 27 February 2026 00:54:32 +0000 (0:00:03.081) 0:01:55.799 ******* 2026-02-27 00:54:59.449703 | orchestrator | 2026-02-27 00:54:59.449708 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-27 00:54:59.449713 | orchestrator | Friday 27 February 2026 00:54:32 +0000 (0:00:00.066) 0:01:55.865 ******* 2026-02-27 00:54:59.449718 | orchestrator | 2026-02-27 00:54:59.449722 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-27 00:54:59.449727 | orchestrator | Friday 27 February 2026 00:54:32 +0000 (0:00:00.070) 0:01:55.936 ******* 2026-02-27 00:54:59.449732 | orchestrator | 2026-02-27 00:54:59.449737 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-27 00:54:59.449742 | orchestrator | Friday 27 February 2026 00:54:32 +0000 (0:00:00.065) 0:01:56.001 ******* 2026-02-27 00:54:59.449746 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.449751 | orchestrator | changed: 
[testbed-node-1] 2026-02-27 00:54:59.449756 | orchestrator | 2026-02-27 00:54:59.449761 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-27 00:54:59.449766 | orchestrator | Friday 27 February 2026 00:54:38 +0000 (0:00:06.415) 0:02:02.417 ******* 2026-02-27 00:54:59.449773 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:54:59.449778 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.449783 | orchestrator | 2026-02-27 00:54:59.449788 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-27 00:54:59.449793 | orchestrator | Friday 27 February 2026 00:54:45 +0000 (0:00:06.407) 0:02:08.824 ******* 2026-02-27 00:54:59.449797 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:54:59.449802 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:54:59.449807 | orchestrator | 2026-02-27 00:54:59.449811 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-27 00:54:59.449816 | orchestrator | Friday 27 February 2026 00:54:51 +0000 (0:00:06.435) 0:02:15.259 ******* 2026-02-27 00:54:59.449821 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:54:59.449826 | orchestrator | 2026-02-27 00:54:59.449831 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-27 00:54:59.449835 | orchestrator | Friday 27 February 2026 00:54:51 +0000 (0:00:00.139) 0:02:15.399 ******* 2026-02-27 00:54:59.449840 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449845 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449850 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.449855 | orchestrator | 2026-02-27 00:54:59.449859 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-27 00:54:59.449864 | orchestrator | Friday 27 February 2026 00:54:52 +0000 (0:00:00.903) 0:02:16.302 ******* 
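

The "Get OVN_Northbound cluster leader" / "Get OVN_Southbound cluster leader" tasks above query each node's Raft role so that connection settings are applied only on the leader (note the `skipping` on the two followers and `changed` on testbed-node-0). A minimal sketch of that decision, parsing `ovs-appctl ... cluster/status`-style output; the `SAMPLE_STATUS` text and the `is_leader` helper are illustrative assumptions, not the role's actual implementation, and real output contains additional fields:

```python
# Hypothetical helper: decide whether this node is the Raft leader by
# parsing "Key: value" lines from a cluster/status dump. SAMPLE_STATUS
# is an illustrative stand-in for real ovsdb-server output.

SAMPLE_STATUS = """\
Name: OVN_Northbound
Status: cluster member
Role: leader
Term: 4
Leader: self
"""


def parse_cluster_status(text: str) -> dict:
    """Turn 'Key: value' lines into a dict, ignoring anything else."""
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition(": ")
        if sep:
            info[key.strip()] = value.strip()
    return info


def is_leader(text: str) -> bool:
    # Followers report "Role: follower" and are skipped by the role.
    return parse_cluster_status(text).get("Role") == "leader"


if __name__ == "__main__":
    print(is_leader(SAMPLE_STATUS))
```

Running the leader-only configuration on exactly one node avoids three hosts racing to rewrite the same clustered NB/SB connection rows.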
2026-02-27 00:54:59.449869 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.449874 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.449879 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.449883 | orchestrator | 2026-02-27 00:54:59.449888 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-27 00:54:59.449893 | orchestrator | Friday 27 February 2026 00:54:53 +0000 (0:00:00.684) 0:02:16.987 ******* 2026-02-27 00:54:59.449897 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449902 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.449907 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449912 | orchestrator | 2026-02-27 00:54:59.449916 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-27 00:54:59.449921 | orchestrator | Friday 27 February 2026 00:54:54 +0000 (0:00:00.825) 0:02:17.812 ******* 2026-02-27 00:54:59.449926 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:54:59.449931 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:54:59.449936 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:54:59.449940 | orchestrator | 2026-02-27 00:54:59.449945 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-27 00:54:59.449950 | orchestrator | Friday 27 February 2026 00:54:54 +0000 (0:00:00.687) 0:02:18.500 ******* 2026-02-27 00:54:59.449954 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449959 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449964 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:54:59.449969 | orchestrator | 2026-02-27 00:54:59.449974 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-27 00:54:59.449978 | orchestrator | Friday 27 February 2026 00:54:55 +0000 (0:00:00.850) 0:02:19.351 ******* 2026-02-27 00:54:59.449983 | orchestrator 
| ok: [testbed-node-2] 2026-02-27 00:54:59.449988 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:54:59.449993 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:54:59.449997 | orchestrator | 2026-02-27 00:54:59.450002 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 00:54:59.450007 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-27 00:54:59.450046 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-27 00:54:59.450056 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-27 00:54:59.450065 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:54:59.450075 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:54:59.450092 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 00:54:59.450098 | orchestrator | 2026-02-27 00:54:59.450102 | orchestrator | 2026-02-27 00:54:59.450107 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 00:54:59.450112 | orchestrator | Friday 27 February 2026 00:54:56 +0000 (0:00:01.050) 0:02:20.401 ******* 2026-02-27 00:54:59.450117 | orchestrator | =============================================================================== 2026-02-27 00:54:59.450122 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.63s 2026-02-27 00:54:59.450126 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.96s 2026-02-27 00:54:59.450131 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.69s 2026-02-27 00:54:59.450136 | orchestrator | ovn-db 
: Restart ovn-sb-db container ------------------------------------ 9.56s 2026-02-27 00:54:59.450141 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.22s 2026-02-27 00:54:59.450145 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.48s 2026-02-27 00:54:59.450150 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.40s 2026-02-27 00:54:59.450155 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.08s 2026-02-27 00:54:59.450160 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.60s 2026-02-27 00:54:59.450164 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.47s 2026-02-27 00:54:59.450169 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.29s 2026-02-27 00:54:59.450174 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.94s 2026-02-27 00:54:59.450179 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.80s 2026-02-27 00:54:59.450183 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.76s 2026-02-27 00:54:59.450188 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.72s 2026-02-27 00:54:59.450193 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.63s 2026-02-27 00:54:59.450198 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.60s 2026-02-27 00:54:59.450203 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.53s 2026-02-27 00:54:59.450207 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.52s 2026-02-27 00:54:59.450212 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 1.47s 2026-02-27 00:55:02.543604 | orchestrator | 2026-02-27 00:55:02 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:55:02.544361 | orchestrator | 2026-02-27 00:55:02 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:55:02.544835 | orchestrator | 2026-02-27 00:55:02 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:55:05.586189 | orchestrator | 2026-02-27 00:55:05 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:55:05.586880 | orchestrator | 2026-02-27 00:55:05 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:55:05.586974 | orchestrator | 2026-02-27 00:55:05 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:55:08.633996 | orchestrator | 2026-02-27 00:55:08 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:55:08.636955 | orchestrator | 2026-02-27 00:55:08 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:55:08.637018 | orchestrator | 2026-02-27 00:55:08 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:55:11.683627 | orchestrator | 2026-02-27 00:55:11 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:55:11.684982 | orchestrator | 2026-02-27 00:55:11 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:55:11.685313 | orchestrator | 2026-02-27 00:55:11 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:55:14.732666 | orchestrator | 2026-02-27 00:55:14 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:55:14.733928 | orchestrator | 2026-02-27 00:55:14 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:55:14.733987 | orchestrator | 2026-02-27 00:55:14 | INFO  | Wait 1 second(s) until the next check 2026-02-27 
00:55:17.784343 | orchestrator | 2026-02-27 00:55:17 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:55:17.786830 | orchestrator | 2026-02-27 00:55:17 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:55:17.787805 | orchestrator | 2026-02-27 00:55:17 | INFO  | Wait 1 second(s) until the next check [... identical "is in state STARTED" / "Wait 1 second(s) until the next check" messages for tasks df088211-dd3c-43b8-b652-91f67717ebda and 929e3c69-8775-4ef6-8f45-30290b2ec5d9, repeated every ~3 seconds from 00:55:20 through 00:58:08, elided ...] 2026-02-27
00:58:11 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state STARTED 2026-02-27 00:58:11.699693 | orchestrator | 2026-02-27 00:58:11 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:11.699757 | orchestrator | 2026-02-27 00:58:11 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:14.757880 | orchestrator | 2026-02-27 00:58:14 | INFO  | Task df088211-dd3c-43b8-b652-91f67717ebda is in state SUCCESS 2026-02-27 00:58:14.760894 | orchestrator | 2026-02-27 00:58:14.760948 | orchestrator | 2026-02-27 00:58:14.760964 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 00:58:14.760977 | orchestrator | 2026-02-27 00:58:14.761086 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 00:58:14.761102 | orchestrator | Friday 27 February 2026 00:51:13 +0000 (0:00:00.338) 0:00:00.338 ******* 2026-02-27 00:58:14.761115 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.761129 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.761142 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.761155 | orchestrator | 2026-02-27 00:58:14.761166 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 00:58:14.761179 | orchestrator | Friday 27 February 2026 00:51:14 +0000 (0:00:00.565) 0:00:00.904 ******* 2026-02-27 00:58:14.761194 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-27 00:58:14.761274 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-27 00:58:14.761288 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-27 00:58:14.761301 | orchestrator | 2026-02-27 00:58:14.761315 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-27 00:58:14.761328 | orchestrator | 2026-02-27 00:58:14.761341 | orchestrator 
| TASK [loadbalancer : include_tasks] ******************************************** 2026-02-27 00:58:14.761354 | orchestrator | Friday 27 February 2026 00:51:14 +0000 (0:00:00.599) 0:00:01.504 ******* 2026-02-27 00:58:14.761363 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.761371 | orchestrator | 2026-02-27 00:58:14.761379 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-27 00:58:14.761396 | orchestrator | Friday 27 February 2026 00:51:15 +0000 (0:00:00.907) 0:00:02.412 ******* 2026-02-27 00:58:14.761404 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.761413 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.761422 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.761429 | orchestrator | 2026-02-27 00:58:14.761437 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-27 00:58:14.761446 | orchestrator | Friday 27 February 2026 00:51:16 +0000 (0:00:00.706) 0:00:03.118 ******* 2026-02-27 00:58:14.761453 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.761461 | orchestrator | 2026-02-27 00:58:14.761469 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-27 00:58:14.761478 | orchestrator | Friday 27 February 2026 00:51:17 +0000 (0:00:01.203) 0:00:04.321 ******* 2026-02-27 00:58:14.761487 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.761516 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.761529 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.761541 | orchestrator | 2026-02-27 00:58:14.761554 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-27 00:58:14.761567 | orchestrator | Friday 27 February 2026 00:51:18 +0000 (0:00:00.864) 0:00:05.186 ******* 
2026-02-27 00:58:14.761581 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-27 00:58:14.761596 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-27 00:58:14.761610 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-27 00:58:14.761624 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-27 00:58:14.761638 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-27 00:58:14.761653 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-27 00:58:14.761666 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-27 00:58:14.761681 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-27 00:58:14.761693 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-27 00:58:14.761707 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-27 00:58:14.761721 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-27 00:58:14.761734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-27 00:58:14.761746 | orchestrator | 2026-02-27 00:58:14.761760 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-27 00:58:14.761769 | orchestrator | Friday 27 February 2026 00:51:20 +0000 (0:00:02.495) 0:00:07.682 ******* 2026-02-27 00:58:14.761776 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-27 00:58:14.761806 | orchestrator | changed: 
[testbed-node-1] => (item=ip_vs) 2026-02-27 00:58:14.761815 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-27 00:58:14.761823 | orchestrator | 2026-02-27 00:58:14.761831 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-27 00:58:14.761839 | orchestrator | Friday 27 February 2026 00:51:22 +0000 (0:00:01.137) 0:00:08.819 ******* 2026-02-27 00:58:14.761847 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-27 00:58:14.761855 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-27 00:58:14.761863 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-27 00:58:14.761871 | orchestrator | 2026-02-27 00:58:14.761879 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-27 00:58:14.761887 | orchestrator | Friday 27 February 2026 00:51:23 +0000 (0:00:01.693) 0:00:10.513 ******* 2026-02-27 00:58:14.761895 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-27 00:58:14.761903 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.761926 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-27 00:58:14.761934 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.761942 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-27 00:58:14.761950 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.761958 | orchestrator | 2026-02-27 00:58:14.761966 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-27 00:58:14.761973 | orchestrator | Friday 27 February 2026 00:51:25 +0000 (0:00:01.544) 0:00:12.057 ******* 2026-02-27 00:58:14.761984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.762212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.762236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.762251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.762266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.762292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.762307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.762333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.762354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.762368 | orchestrator | 2026-02-27 00:58:14.762382 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-27 00:58:14.762396 | orchestrator | Friday 27 February 2026 00:51:29 +0000 (0:00:04.091) 0:00:16.149 ******* 2026-02-27 00:58:14.762412 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.762426 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.762441 | 
orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.762454 | orchestrator | 2026-02-27 00:58:14.762466 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-27 00:58:14.762480 | orchestrator | Friday 27 February 2026 00:51:31 +0000 (0:00:01.845) 0:00:17.994 ******* 2026-02-27 00:58:14.762492 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-27 00:58:14.762505 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-27 00:58:14.762517 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-27 00:58:14.762531 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-27 00:58:14.762543 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-27 00:58:14.762556 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-27 00:58:14.762568 | orchestrator | 2026-02-27 00:58:14.762580 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-27 00:58:14.762593 | orchestrator | Friday 27 February 2026 00:51:34 +0000 (0:00:03.636) 0:00:21.631 ******* 2026-02-27 00:58:14.762606 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.762619 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.762632 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.762640 | orchestrator | 2026-02-27 00:58:14.762648 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-27 00:58:14.762656 | orchestrator | Friday 27 February 2026 00:51:37 +0000 (0:00:02.474) 0:00:24.105 ******* 2026-02-27 00:58:14.762664 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.762672 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.762751 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.762760 | orchestrator | 2026-02-27 00:58:14.762768 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-27 
00:58:14.762776 | orchestrator | Friday 27 February 2026 00:51:39 +0000 (0:00:01.977) 0:00:26.083 ******* 2026-02-27 00:58:14.762819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.762859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.762870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.762884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-27 00:58:14.762893 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.762901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.762974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.762985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.763005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-27 00:58:14.763014 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.763022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.763035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.763043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.763051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-27 00:58:14.763060 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.763067 | orchestrator | 2026-02-27 00:58:14.763075 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-27 00:58:14.763083 | orchestrator | Friday 27 February 2026 00:51:41 +0000 (0:00:02.458) 0:00:28.542 ******* 2026-02-27 00:58:14.763092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-02-27 00:58:14.763147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-27 00:58:14.763158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 
00:58:14.763209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-27 00:58:14.763225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.763259 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0', '__omit_place_holder__da7bcc3587dcedd5a2549fe3261c5774b177e3d0'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-27 00:58:14.763272 | orchestrator | 2026-02-27 00:58:14.763286 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-27 00:58:14.763298 | orchestrator | Friday 27 February 2026 00:51:47 +0000 (0:00:05.402) 0:00:33.945 ******* 2026-02-27 00:58:14.763311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.763425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.763448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.763463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.763476 | orchestrator | 2026-02-27 00:58:14.763489 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-27 00:58:14.763502 | orchestrator | Friday 27 February 2026 00:51:50 +0000 (0:00:03.795) 0:00:37.740 ******* 2026-02-27 00:58:14.763516 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-27 00:58:14.763537 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-27 00:58:14.763551 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-27 00:58:14.763564 | orchestrator | 2026-02-27 00:58:14.763576 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-27 00:58:14.763589 | orchestrator | Friday 27 February 2026 00:51:54 +0000 (0:00:03.188) 0:00:40.929 ******* 2026-02-27 00:58:14.763767 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-27 00:58:14.763826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-27 00:58:14.763842 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-27 00:58:14.763855 | orchestrator | 2026-02-27 00:58:14.763868 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-27 00:58:14.763881 | orchestrator | Friday 27 February 2026 00:52:01 +0000 (0:00:07.632) 0:00:48.562 ******* 2026-02-27 00:58:14.763894 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.763907 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.763921 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.763929 | orchestrator | 2026-02-27 00:58:14.763937 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-27 00:58:14.763945 | orchestrator | Friday 27 February 2026 00:52:02 +0000 (0:00:00.686) 0:00:49.248 ******* 2026-02-27 00:58:14.763954 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-27 00:58:14.763967 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-27 00:58:14.763980 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-27 00:58:14.763992 | orchestrator | 2026-02-27 00:58:14.764005 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-27 00:58:14.764030 | orchestrator | Friday 27 February 2026 00:52:05 +0000 (0:00:02.990) 0:00:52.239 ******* 2026-02-27 00:58:14.764043 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-27 00:58:14.764056 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-27 00:58:14.764069 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-27 00:58:14.764082 | orchestrator | 2026-02-27 00:58:14.764095 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-27 00:58:14.764108 | orchestrator | Friday 27 February 2026 00:52:08 +0000 (0:00:03.419) 0:00:55.659 ******* 2026-02-27 00:58:14.764121 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-27 00:58:14.764169 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-27 00:58:14.764185 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-27 00:58:14.764198 | orchestrator | 2026-02-27 00:58:14.764212 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-27 00:58:14.764225 | orchestrator | Friday 27 February 2026 00:52:11 +0000 (0:00:02.173) 0:00:57.832 ******* 2026-02-27 00:58:14.764238 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-27 00:58:14.764251 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-27 00:58:14.764265 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-27 00:58:14.764278 | orchestrator | 2026-02-27 00:58:14.764292 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-27 00:58:14.764304 | orchestrator | Friday 27 February 2026 00:52:13 +0000 (0:00:01.938) 0:00:59.770 ******* 2026-02-27 00:58:14.764318 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.764332 | 
orchestrator | 2026-02-27 00:58:14.764344 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-27 00:58:14.764358 | orchestrator | Friday 27 February 2026 00:52:14 +0000 (0:00:01.613) 0:01:01.384 ******* 2026-02-27 00:58:14.764370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.764395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.764410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-27 00:58:14.764443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.764459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.764473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-27 00:58:14.764488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.764501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.764525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.764539 | orchestrator | 2026-02-27 00:58:14.764552 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-27 00:58:14.764565 | orchestrator | Friday 27 February 2026 00:52:18 +0000 (0:00:03.814) 0:01:05.198 ******* 2026-02-27 00:58:14.764591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.764605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.764619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.764632 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.764646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.764819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.764852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.764867 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.764881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.764915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.764931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.764945 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.764959 | orchestrator | 2026-02-27 00:58:14.764972 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-27 00:58:14.764986 | orchestrator | Friday 27 February 2026 00:52:19 +0000 (0:00:00.891) 0:01:06.090 ******* 2026-02-27 00:58:14.764999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.765014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.765029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.765038 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.765050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.765062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.765071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.765079 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.765087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.765095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.765103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.765111 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.765119 | orchestrator | 2026-02-27 00:58:14.765127 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-27 00:58:14.765141 | orchestrator | Friday 27 February 2026 00:52:20 +0000 (0:00:01.412) 0:01:07.502 ******* 2026-02-27 00:58:14.765154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.765163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.765175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.765183 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.765191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.765200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.765208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-27 00:58:14.765229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.765238 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.765246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-27 00:58:14.765258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-27 00:58:14.765266 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.765274 | orchestrator | 2026-02-27 00:58:14.765282 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-27 00:58:14.765290 | orchestrator | Friday 27 February 2026 00:52:22 +0000 (0:00:02.108) 0:01:09.611 ******* 2026-02-27 00:58:14.765299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': 
'30'}}})
2026-02-27 00:58:14.765307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.765315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.765328 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.765336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.765351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.765393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.765402 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.765414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.765423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.765431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.765440 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.765453 | orchestrator |
2026-02-27 00:58:14.765461 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-27 00:58:14.765469 | orchestrator | Friday 27 February 2026 00:52:24 +0000 (0:00:01.950) 0:01:11.561 *******
2026-02-27 00:58:14.765477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767053 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.767067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767101 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.767112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767156 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.767204 | orchestrator |
2026-02-27 00:58:14.767216 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-02-27 00:58:14.767228 | orchestrator | Friday 27 February 2026 00:52:25 +0000 (0:00:00.881) 0:01:12.442 *******
2026-02-27 00:58:14.767245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767286 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.767293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767321 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.767328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767358 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.767365 | orchestrator |
2026-02-27 00:58:14.767372 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-02-27 00:58:14.767379 | orchestrator | Friday 27 February 2026 00:52:26 +0000 (0:00:00.942) 0:01:13.385 *******
2026-02-27 00:58:14.767386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767413 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.767420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767450 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.767457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767478 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.767484 | orchestrator |
2026-02-27 00:58:14.767491 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-02-27 00:58:14.767502 | orchestrator | Friday 27 February 2026 00:52:27 +0000 (0:00:00.864) 0:01:14.249 *******
2026-02-27 00:58:14.767509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767623 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.767631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767655 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.767694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.767715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-27 00:58:14.767728 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.767736 | orchestrator |
2026-02-27 00:58:14.767744 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-02-27 00:58:14.767751 | orchestrator | Friday 27 February 2026 00:52:28 +0000 (0:00:00.889) 0:01:15.139 *******
2026-02-27 00:58:14.767759 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-27 00:58:14.767767 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-27 00:58:14.767775 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-27 00:58:14.767804 | orchestrator |
2026-02-27 00:58:14.767816 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-02-27 00:58:14.767824 | orchestrator | Friday 27 February 2026 00:52:30 +0000 (0:00:02.141) 0:01:17.281 *******
2026-02-27 00:58:14.767832 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-27 00:58:14.767839 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-27 00:58:14.767847 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-27 00:58:14.767854 | orchestrator |
2026-02-27 00:58:14.767862 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-02-27 00:58:14.767870 | orchestrator | Friday 27 February 2026 00:52:32 +0000 (0:00:01.769) 0:01:19.050 *******
2026-02-27 00:58:14.767877 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-27 00:58:14.767885 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-27 00:58:14.767893 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-27 00:58:14.767901 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.767909 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-27 00:58:14.767916 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-27 00:58:14.767924 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.767932 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-27 00:58:14.767939 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.767947 | orchestrator |
2026-02-27 00:58:14.767955 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-02-27 00:58:14.767962 | orchestrator | Friday 27 February 2026 00:52:33 +0000 (0:00:01.428) 0:01:20.478 *******
2026-02-27 00:58:14.767975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.767982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.768001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-27 00:58:14.768009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.768016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.768023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-27 00:58:14.768030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes':
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.768041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.768052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-27 00:58:14.768059 | orchestrator | 2026-02-27 00:58:14.768066 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-27 00:58:14.768073 | orchestrator | Friday 27 February 2026 00:52:36 +0000 (0:00:03.005) 0:01:23.484 ******* 2026-02-27 00:58:14.768080 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.768086 | orchestrator | 2026-02-27 00:58:14.768096 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-27 00:58:14.768102 | orchestrator | Friday 27 
February 2026 00:52:37 +0000 (0:00:00.795) 0:01:24.280 ******* 2026-02-27 00:58:14.768111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-27 00:58:14.768119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.768127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-27 00:58:14.768160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.768167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-27 00:58:14.768188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.768234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768249 | orchestrator | 2026-02-27 00:58:14.768256 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-27 00:58:14.768263 | orchestrator | Friday 27 February 2026 00:52:42 +0000 (0:00:04.899) 0:01:29.179 ******* 2026-02-27 00:58:14.768273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-27 00:58:14.768281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.768288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-27 00:58:14.768358 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.768369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.768376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768390 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.768397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-27 00:58:14.768409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.768421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.768444 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.768455 | orchestrator | 2026-02-27 00:58:14.768466 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-27 00:58:14.768477 | orchestrator | Friday 27 February 2026 00:52:44 +0000 (0:00:01.585) 0:01:30.764 ******* 2026-02-27 00:58:14.768487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-27 00:58:14.768499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-27 00:58:14.768510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-27 00:58:14.768521 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.768532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-27 00:58:14.768543 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.768554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-27 00:58:14.768590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-27 00:58:14.768603 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.768613 | orchestrator | 2026-02-27 00:58:14.768623 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-27 00:58:14.768633 | orchestrator | Friday 27 February 2026 00:52:45 +0000 (0:00:01.078) 0:01:31.843 ******* 2026-02-27 00:58:14.768643 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.768654 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.768673 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.768685 | orchestrator | 2026-02-27 00:58:14.768696 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-27 00:58:14.768707 | orchestrator | Friday 27 
February 2026 00:52:46 +0000 (0:00:01.479) 0:01:33.322 ******* 2026-02-27 00:58:14.768714 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.768721 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.768762 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.768770 | orchestrator | 2026-02-27 00:58:14.768777 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-27 00:58:14.768924 | orchestrator | Friday 27 February 2026 00:52:48 +0000 (0:00:02.200) 0:01:35.523 ******* 2026-02-27 00:58:14.768940 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.768950 | orchestrator | 2026-02-27 00:58:14.768960 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-27 00:58:14.769066 | orchestrator | Friday 27 February 2026 00:52:49 +0000 (0:00:00.964) 0:01:36.487 ******* 2026-02-27 00:58:14.769095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.769109 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.769167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.769214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769235 | orchestrator | 2026-02-27 
00:58:14.769246 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-27 00:58:14.769278 | orchestrator | Friday 27 February 2026 00:52:54 +0000 (0:00:05.025) 0:01:41.512 ******* 2026-02-27 00:58:14.769291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.769305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769324 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769337 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.769355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.769370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 
00:58:14.769413 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.769433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.769457 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.769468 | orchestrator | 2026-02-27 00:58:14.769479 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-27 00:58:14.769489 | orchestrator | Friday 27 February 2026 00:52:55 +0000 (0:00:00.674) 0:01:42.187 ******* 2026-02-27 00:58:14.769500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-27 
00:58:14.769516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-27 00:58:14.769526 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.769537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-27 00:58:14.769556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-27 00:58:14.769568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-27 00:58:14.769580 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.769592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-27 00:58:14.769604 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.769615 | orchestrator | 2026-02-27 00:58:14.769626 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-27 00:58:14.769636 | orchestrator | Friday 27 February 2026 00:52:56 +0000 (0:00:01.106) 0:01:43.294 ******* 2026-02-27 00:58:14.769647 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.769658 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.769727 | orchestrator | changed: [testbed-node-2] 
2026-02-27 00:58:14.769739 | orchestrator | 2026-02-27 00:58:14.769824 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-27 00:58:14.769835 | orchestrator | Friday 27 February 2026 00:52:57 +0000 (0:00:01.440) 0:01:44.734 ******* 2026-02-27 00:58:14.769846 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.769856 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.769866 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.769877 | orchestrator | 2026-02-27 00:58:14.769888 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-27 00:58:14.769900 | orchestrator | Friday 27 February 2026 00:53:00 +0000 (0:00:02.405) 0:01:47.140 ******* 2026-02-27 00:58:14.769912 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.769922 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.769933 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.769943 | orchestrator | 2026-02-27 00:58:14.769954 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-27 00:58:14.770076 | orchestrator | Friday 27 February 2026 00:53:00 +0000 (0:00:00.374) 0:01:47.515 ******* 2026-02-27 00:58:14.770092 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.770104 | orchestrator | 2026-02-27 00:58:14.770114 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-27 00:58:14.770125 | orchestrator | Friday 27 February 2026 00:53:01 +0000 (0:00:01.093) 0:01:48.608 ******* 2026-02-27 00:58:14.770150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 
rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-27 00:58:14.770164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-27 00:58:14.770193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-27 00:58:14.770205 | orchestrator | 2026-02-27 00:58:14.770244 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-27 00:58:14.770292 | orchestrator | Friday 27 February 2026 00:53:05 +0000 (0:00:03.432) 0:01:52.040 ******* 2026-02-27 00:58:14.770305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-27 00:58:14.770316 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.770348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 
2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-27 00:58:14.770360 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.770385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-27 00:58:14.770407 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.770417 | orchestrator | 2026-02-27 00:58:14.770426 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-27 00:58:14.770437 | orchestrator | Friday 27 February 2026 00:53:07 +0000 (0:00:02.615) 0:01:54.655 ******* 2026-02-27 00:58:14.770449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-27 00:58:14.770467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-27 00:58:14.770478 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.770513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-27 00:58:14.770524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-27 00:58:14.770537 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.770553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-27 00:58:14.770564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-27 00:58:14.770574 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.770585 | orchestrator | 2026-02-27 00:58:14.770595 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-27 00:58:14.770604 | orchestrator | Friday 27 February 2026 00:53:10 +0000 (0:00:02.196) 0:01:56.852 ******* 2026-02-27 00:58:14.770614 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.770624 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.770635 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.770645 | orchestrator | 2026-02-27 00:58:14.770654 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-27 00:58:14.770672 | orchestrator | Friday 27 February 2026 00:53:10 +0000 (0:00:00.851) 0:01:57.704 ******* 2026-02-27 00:58:14.770682 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.770692 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.770703 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.770895 | orchestrator | 2026-02-27 00:58:14.770902 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-27 00:58:14.770918 | orchestrator | Friday 27 February 2026 00:53:12 +0000 (0:00:01.527) 0:01:59.231 ******* 2026-02-27 00:58:14.770928 | orchestrator | included: 
cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.770935 | orchestrator | 2026-02-27 00:58:14.770941 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-27 00:58:14.770951 | orchestrator | Friday 27 February 2026 00:53:13 +0000 (0:00:00.807) 0:02:00.039 ******* 2026-02-27 00:58:14.770963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.770983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.770992 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.771027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.771060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771095 | orchestrator | 2026-02-27 00:58:14.771102 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-27 00:58:14.771109 | orchestrator | Friday 27 February 2026 00:53:18 +0000 (0:00:04.825) 0:02:04.864 ******* 2026-02-27 00:58:14.771120 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.771127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771159 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.771166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.771175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771199 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.771205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.771217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771240 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.771246 | orchestrator | 2026-02-27 00:58:14.771252 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-27 00:58:14.771258 | orchestrator | Friday 27 February 2026 00:53:20 +0000 (0:00:01.958) 0:02:06.822 ******* 2026-02-27 00:58:14.771265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-27 00:58:14.771272 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-27 00:58:14.771283 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.771289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-27 00:58:14.771323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-27 00:58:14.771330 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.771336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-27 00:58:14.771341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2026-02-27 00:58:14.771347 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.771352 | orchestrator |
2026-02-27 00:58:14.771373 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-27 00:58:14.771378 | orchestrator | Friday 27 February 2026 00:53:22 +0000 (0:00:02.047) 0:02:08.870 *******
2026-02-27 00:58:14.771384 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:58:14.771389 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.771394 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:58:14.771400 | orchestrator |
2026-02-27 00:58:14.771405 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-27 00:58:14.771412 | orchestrator | Friday 27 February 2026 00:53:23 +0000 (0:00:01.691) 0:02:10.562 *******
2026-02-27 00:58:14.771421 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:58:14.771429 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.771434 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:58:14.771440 | orchestrator |
2026-02-27 00:58:14.771449 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-27 00:58:14.771455 | orchestrator | Friday 27 February 2026 00:53:26 +0000 (0:00:02.973) 0:02:13.536 *******
2026-02-27 00:58:14.771460 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.771465 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.771471 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.771476 | orchestrator |
2026-02-27 00:58:14.771482 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-27 00:58:14.771487 | orchestrator | Friday 27 February 2026 00:53:27 +0000 (0:00:00.554) 0:02:14.090 *******
2026-02-27 00:58:14.771492 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.771498 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.771503 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.771508 | orchestrator |
2026-02-27 00:58:14.771514 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-27 00:58:14.771519 | orchestrator | Friday 27 February 2026 00:53:27 +0000 (0:00:00.869) 0:02:14.570 *******
2026-02-27 00:58:14.771525 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:58:14.771530 | orchestrator |
2026-02-27 00:58:14.771535 | orchestrator | TASK [haproxy-config : Copying over designate
haproxy config] ****************** 2026-02-27 00:58:14.771541 | orchestrator | Friday 27 February 2026 00:53:28 +0000 (0:00:00.869) 0:02:15.440 ******* 2026-02-27 00:58:14.771550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-27 00:58:14.771561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 00:58:14.771566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-27 00:58:14.771582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 
00:58:14.771588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 00:58:14.771600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2026-02-27 00:58:14.771612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-27 00:58:14.771693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 00:58:14.771698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771746 | orchestrator | 2026-02-27 00:58:14.771751 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-27 00:58:14.771757 | orchestrator | Friday 27 February 2026 00:53:34 +0000 (0:00:06.170) 0:02:21.611 ******* 2026-02-27 00:58:14.771763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 00:58:14.771771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 00:58:14.771777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 00:58:14.771835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 00:58:14.771845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-sink 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771908 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.771914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771919 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.771929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 00:58:14.771939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 00:58:14.771947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.771983 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.771988 | orchestrator | 2026-02-27 00:58:14.771994 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-27 00:58:14.771999 | orchestrator | Friday 
27 February 2026 00:53:35 +0000 (0:00:01.013) 0:02:22.624 ******* 2026-02-27 00:58:14.772006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-27 00:58:14.772020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-27 00:58:14.772026 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.772031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-27 00:58:14.772037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-27 00:58:14.772042 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.772054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-27 00:58:14.772063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-27 00:58:14.772072 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.772080 | orchestrator | 2026-02-27 00:58:14.772088 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-27 00:58:14.772097 | orchestrator | Friday 27 February 2026 00:53:37 +0000 (0:00:01.448) 0:02:24.072 ******* 2026-02-27 
00:58:14.772105 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.772113 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.772121 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.772129 | orchestrator | 2026-02-27 00:58:14.772138 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-27 00:58:14.772148 | orchestrator | Friday 27 February 2026 00:53:39 +0000 (0:00:02.099) 0:02:26.171 ******* 2026-02-27 00:58:14.772157 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.772165 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.772173 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.772178 | orchestrator | 2026-02-27 00:58:14.772184 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-27 00:58:14.772189 | orchestrator | Friday 27 February 2026 00:53:41 +0000 (0:00:02.432) 0:02:28.604 ******* 2026-02-27 00:58:14.772195 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.772200 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.772205 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.772211 | orchestrator | 2026-02-27 00:58:14.772216 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-27 00:58:14.772222 | orchestrator | Friday 27 February 2026 00:53:42 +0000 (0:00:00.582) 0:02:29.187 ******* 2026-02-27 00:58:14.772227 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.772232 | orchestrator | 2026-02-27 00:58:14.772238 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-27 00:58:14.772243 | orchestrator | Friday 27 February 2026 00:53:43 +0000 (0:00:00.834) 0:02:30.021 ******* 2026-02-27 00:58:14.772256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 00:58:14.772272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.772279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 00:58:14.772294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': 
{'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.772301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 00:58:14.772727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.772748 | orchestrator | 2026-02-27 00:58:14.772754 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-27 00:58:14.772759 | orchestrator | Friday 27 February 2026 00:53:49 +0000 (0:00:06.063) 0:02:36.085 ******* 2026-02-27 00:58:14.772765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-27 00:58:14.772831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.772839 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.772849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-27 00:58:14.772866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.772872 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.772882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-27 00:58:14.772891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.772900 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.772905 | orchestrator | 2026-02-27 00:58:14.772910 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-27 00:58:14.772915 | orchestrator | Friday 27 February 2026 00:53:52 +0000 (0:00:03.390) 0:02:39.476 ******* 2026-02-27 00:58:14.772920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-27 00:58:14.772928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-27 00:58:14.772933 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.772938 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-27 00:58:14.772943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-27 00:58:14.772952 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.772957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-27 00:58:14.772962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-27 00:58:14.772967 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.772972 | orchestrator | 2026-02-27 00:58:14.772977 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-27 00:58:14.772982 | orchestrator | Friday 27 February 2026 00:53:56 +0000 (0:00:03.527) 0:02:43.003 ******* 2026-02-27 00:58:14.772987 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.772992 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.772997 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.773001 | orchestrator | 2026-02-27 00:58:14.773006 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-27 00:58:14.773011 | orchestrator | Friday 27 February 2026 00:53:57 +0000 (0:00:01.344) 0:02:44.348 ******* 2026-02-27 00:58:14.773016 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.773020 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.773025 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.773030 | orchestrator | 2026-02-27 00:58:14.773037 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-27 00:58:14.773042 | orchestrator | Friday 27 February 2026 00:53:59 +0000 (0:00:02.154) 0:02:46.502 ******* 2026-02-27 00:58:14.773047 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773052 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773056 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773061 | orchestrator | 2026-02-27 00:58:14.773066 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-27 
00:58:14.773071 | orchestrator | Friday 27 February 2026 00:54:00 +0000 (0:00:00.623) 0:02:47.125 ******* 2026-02-27 00:58:14.773076 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.773080 | orchestrator | 2026-02-27 00:58:14.773085 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-27 00:58:14.773090 | orchestrator | Friday 27 February 2026 00:54:01 +0000 (0:00:00.920) 0:02:48.046 ******* 2026-02-27 00:58:14.773098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 00:58:14.773107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 00:58:14.773112 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 00:58:14.773118 | orchestrator | 2026-02-27 00:58:14.773122 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-27 00:58:14.773127 | orchestrator | Friday 27 February 2026 00:54:04 +0000 (0:00:03.646) 0:02:51.693 ******* 2026-02-27 00:58:14.773132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 00:58:14.773137 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 00:58:14.773178 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 00:58:14.773209 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773219 | orchestrator | 2026-02-27 00:58:14.773224 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-27 00:58:14.773229 | orchestrator | Friday 27 February 2026 00:54:05 +0000 (0:00:00.705) 0:02:52.399 ******* 2026-02-27 00:58:14.773239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-27 00:58:14.773245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-27 00:58:14.773250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-27 00:58:14.773255 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-27 00:58:14.773265 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-27 00:58:14.773274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-27 00:58:14.773279 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773284 | orchestrator | 2026-02-27 00:58:14.773289 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-27 00:58:14.773294 | orchestrator | Friday 27 February 2026 00:54:06 +0000 (0:00:00.705) 0:02:53.104 ******* 2026-02-27 00:58:14.773299 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.773303 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.773308 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.773359 | orchestrator | 2026-02-27 00:58:14.773365 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-27 00:58:14.773371 | orchestrator | Friday 27 February 2026 00:54:07 
+0000 (0:00:01.221) 0:02:54.326 ******* 2026-02-27 00:58:14.773377 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.773382 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.773388 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.773393 | orchestrator | 2026-02-27 00:58:14.773399 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-27 00:58:14.773404 | orchestrator | Friday 27 February 2026 00:54:09 +0000 (0:00:02.188) 0:02:56.514 ******* 2026-02-27 00:58:14.773410 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773415 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773421 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773426 | orchestrator | 2026-02-27 00:58:14.773438 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-27 00:58:14.773443 | orchestrator | Friday 27 February 2026 00:54:10 +0000 (0:00:00.570) 0:02:57.084 ******* 2026-02-27 00:58:14.773449 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.773454 | orchestrator | 2026-02-27 00:58:14.773460 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-27 00:58:14.773466 | orchestrator | Friday 27 February 2026 00:54:11 +0000 (0:00:00.981) 0:02:58.066 ******* 2026-02-27 00:58:14.773480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 00:58:14.773491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 00:58:14.773515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 
'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 00:58:14.773521 | orchestrator | 2026-02-27 00:58:14.773527 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-27 00:58:14.773533 | orchestrator | Friday 27 February 2026 00:54:15 +0000 (0:00:03.924) 0:03:01.990 ******* 2026-02-27 00:58:14.773542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 00:58:14.773552 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 00:58:14.773568 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 00:58:14.773587 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773593 | orchestrator | 2026-02-27 00:58:14.773598 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-27 00:58:14.773604 | orchestrator | Friday 27 February 2026 00:54:16 +0000 (0:00:00.986) 0:03:02.977 ******* 2026-02-27 00:58:14.773611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-27 00:58:14.773620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 
'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-27 00:58:14.773626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-27 00:58:14.773632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-27 00:58:14.773637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-27 00:58:14.773643 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-27 00:58:14.773659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-27 00:58:14.773667 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-27 00:58:14.773681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-27 00:58:14.773689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-27 00:58:14.773697 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-27 00:58:14.773717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-27 00:58:14.773726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-27 00:58:14.773731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-27 00:58:14.773738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-27 00:58:14.773743 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773749 | orchestrator | 2026-02-27 00:58:14.773754 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-27 00:58:14.773758 | orchestrator | Friday 27 February 2026 00:54:17 +0000 (0:00:01.016) 0:03:03.993 ******* 2026-02-27 00:58:14.773763 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.773768 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.773773 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.773777 | orchestrator | 2026-02-27 00:58:14.773796 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-27 00:58:14.773801 | orchestrator | Friday 27 February 2026 00:54:18 +0000 (0:00:01.330) 0:03:05.324 ******* 2026-02-27 00:58:14.773806 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.773811 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.773816 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.773820 | orchestrator | 2026-02-27 00:58:14.773825 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-27 00:58:14.773830 | orchestrator | Friday 27 February 2026 00:54:20 +0000 
(0:00:02.232) 0:03:07.557 ******* 2026-02-27 00:58:14.773835 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773839 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773844 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773849 | orchestrator | 2026-02-27 00:58:14.773853 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-27 00:58:14.773858 | orchestrator | Friday 27 February 2026 00:54:21 +0000 (0:00:00.317) 0:03:07.874 ******* 2026-02-27 00:58:14.773863 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.773868 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.773879 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.773884 | orchestrator | 2026-02-27 00:58:14.773889 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-27 00:58:14.773893 | orchestrator | Friday 27 February 2026 00:54:21 +0000 (0:00:00.581) 0:03:08.456 ******* 2026-02-27 00:58:14.773898 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.773903 | orchestrator | 2026-02-27 00:58:14.773908 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-27 00:58:14.773934 | orchestrator | Friday 27 February 2026 00:54:22 +0000 (0:00:00.972) 0:03:09.429 ******* 2026-02-27 00:58:14.773941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 00:58:14.773950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 00:58:14.773956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 00:58:14.773964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 00:58:14.773974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 00:58:14.773979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 00:58:14.773987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 00:58:14.773993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 00:58:14.774054 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 00:58:14.774063 | orchestrator | 2026-02-27 00:58:14.774068 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-27 00:58:14.774073 | orchestrator | Friday 27 February 2026 00:54:27 +0000 (0:00:04.464) 0:03:13.894 ******* 2026-02-27 00:58:14.774110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 
00:58:14.774130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 00:58:14.774139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 00:58:14.774148 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.774176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 00:58:14.774190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 00:58:14.774198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 00:58:14.774215 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.774224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 00:58:14.774234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 00:58:14.774249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 00:58:14.774257 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.774266 | orchestrator | 2026-02-27 00:58:14.774274 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-27 00:58:14.774282 | orchestrator | Friday 27 February 2026 00:54:27 +0000 (0:00:00.742) 0:03:14.636 ******* 2026-02-27 00:58:14.774291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-27 00:58:14.774301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-27 00:58:14.774314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-27 00:58:14.774335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-27 00:58:14.774344 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.774353 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.774362 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-27 00:58:14.774370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-27 00:58:14.774377 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.774385 | orchestrator | 2026-02-27 00:58:14.774393 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-27 00:58:14.774401 | orchestrator | Friday 27 February 2026 00:54:28 +0000 (0:00:00.959) 0:03:15.596 ******* 2026-02-27 00:58:14.774408 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.774415 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.774423 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.774431 | orchestrator | 2026-02-27 00:58:14.774438 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-27 00:58:14.774446 | orchestrator | Friday 27 February 2026 00:54:30 +0000 (0:00:01.587) 0:03:17.183 ******* 2026-02-27 00:58:14.774453 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.774461 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.774468 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.774473 | orchestrator | 2026-02-27 00:58:14.774478 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-27 00:58:14.774483 | orchestrator | Friday 27 February 2026 00:54:32 +0000 (0:00:02.328) 0:03:19.512 ******* 2026-02-27 00:58:14.774488 | orchestrator | skipping: [testbed-node-0] 2026-02-27 
00:58:14.774492 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.774497 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.774502 | orchestrator | 2026-02-27 00:58:14.774506 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-27 00:58:14.774511 | orchestrator | Friday 27 February 2026 00:54:33 +0000 (0:00:00.586) 0:03:20.098 ******* 2026-02-27 00:58:14.774516 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.774521 | orchestrator | 2026-02-27 00:58:14.774525 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-27 00:58:14.774530 | orchestrator | Friday 27 February 2026 00:54:34 +0000 (0:00:01.050) 0:03:21.149 ******* 2026-02-27 00:58:14.774563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 00:58:14.774578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 00:58:14.774585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.774590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.774595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 00:58:14.774605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.774613 | orchestrator | 2026-02-27 00:58:14.774618 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-27 00:58:14.774623 | orchestrator | Friday 27 February 2026 00:54:38 +0000 (0:00:03.881) 0:03:25.030 ******* 2026-02-27 00:58:14.774631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 00:58:14.774677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.774682 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.774687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 00:58:14.774692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.774697 | 
orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.774706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 00:58:14.774719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.774724 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.774729 | orchestrator | 2026-02-27 00:58:14.774734 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] 
************************
2026-02-27 00:58:14.774739 | orchestrator | Friday 27 February 2026 00:54:39 +0000 (0:00:01.287) 0:03:26.318 *******
2026-02-27 00:58:14.774776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-27 00:58:14.774802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-27 00:58:14.774807 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.774812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-27 00:58:14.774817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-27 00:58:14.774822 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.774827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-27 00:58:14.774832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-27 00:58:14.774836 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.774841 | orchestrator |
2026-02-27 00:58:14.774846 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-27 00:58:14.774851 | orchestrator | Friday 27 February 2026 00:54:40 +0000 (0:00:00.920) 0:03:27.239 *******
2026-02-27 00:58:14.774856 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:58:14.774860 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.774865 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:58:14.774870 | orchestrator |
2026-02-27 00:58:14.774875 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-27 00:58:14.774884 | orchestrator | Friday 27 February 2026 00:54:41 +0000 (0:00:01.401) 0:03:28.640 *******
2026-02-27 00:58:14.774889 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:58:14.774894 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.774898 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:58:14.774903 | orchestrator |
2026-02-27 00:58:14.774908 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-27 00:58:14.774913 | orchestrator | Friday 27 February 2026 00:54:44 +0000 (0:00:02.358) 0:03:30.998 *******
2026-02-27 00:58:14.774918 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:58:14.774922 | orchestrator |
2026-02-27 00:58:14.774927 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-27 00:58:14.774932 | orchestrator | Friday 27 February 2026 00:54:45 +0000 (0:00:01.434) 0:03:32.433 *******
2026-02-27 00:58:14.774941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-27 00:58:14.774950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.774956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.774961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.774967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-27 00:58:14.774978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-27 00:58:14.774984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.774992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.774997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775027 | orchestrator |
2026-02-27 00:58:14.775033 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-02-27 00:58:14.775039 | orchestrator | Friday 27 February 2026 00:54:49 +0000 (0:00:03.411) 0:03:35.845 *******
2026-02-27 00:58:14.775045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-27 00:58:14.775053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775075 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-27 00:58:14.775090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775111 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-27 00:58:14.775126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-27 00:58:14.775147 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775153 | orchestrator |
2026-02-27 00:58:14.775166 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-27 00:58:14.775172 | orchestrator | Friday 27 February 2026 00:54:49 +0000 (0:00:00.634) 0:03:36.480 *******
2026-02-27 00:58:14.775178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-27 00:58:14.775184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-27 00:58:14.775189 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-27 00:58:14.775204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-27 00:58:14.775210 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-27 00:58:14.775221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-27 00:58:14.775226 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775236 | orchestrator |
2026-02-27 00:58:14.775242 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-27 00:58:14.775248 | orchestrator | Friday 27 February 2026 00:54:50 +0000 (0:00:01.088) 0:03:37.568 *******
2026-02-27 00:58:14.775253 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:58:14.775259 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.775264 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:58:14.775270 | orchestrator |
2026-02-27 00:58:14.775282 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-27 00:58:14.775288 | orchestrator | Friday 27 February 2026 00:54:52 +0000 (0:00:01.345) 0:03:38.914 *******
2026-02-27 00:58:14.775292 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:58:14.775297 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.775302 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:58:14.775307 | orchestrator |
2026-02-27 00:58:14.775312 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-27 00:58:14.775327 | orchestrator | Friday 27 February 2026 00:54:54 +0000 (0:00:02.276) 0:03:41.190 *******
2026-02-27 00:58:14.775332 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:58:14.775337 | orchestrator |
2026-02-27 00:58:14.775341 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-27 00:58:14.775346 | orchestrator | Friday 27 February 2026 00:54:56 +0000 (0:00:01.700) 0:03:42.891 *******
2026-02-27 00:58:14.775351 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 00:58:14.775356 | orchestrator |
2026-02-27 00:58:14.775361 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-27 00:58:14.775366 | orchestrator | Friday 27 February 2026 00:54:59 +0000 (0:00:03.030) 0:03:45.921 *******
2026-02-27 00:58:14.775375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-27 00:58:14.775384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-27 00:58:14.775417 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-27 00:58:14.775428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-27 00:58:14.775434 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-27 00:58:14.775466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-27 00:58:14.775472 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775476 | orchestrator |
2026-02-27 00:58:14.775481 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-27 00:58:14.775486 | orchestrator | Friday 27 February 2026 00:55:01 +0000 (0:00:02.739) 0:03:48.661 *******
2026-02-27 00:58:14.775495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-27 00:58:14.775500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-27 00:58:14.775508 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-27 00:58:14.775521
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-27 00:58:14.775526 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.775535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 00:58:14.775546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-27 00:58:14.775552 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.775556 | orchestrator | 2026-02-27 00:58:14.775561 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-27 00:58:14.775566 | orchestrator | Friday 27 February 2026 00:55:04 +0000 (0:00:02.878) 0:03:51.539 ******* 2026-02-27 00:58:14.775573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-27 00:58:14.775580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-27 00:58:14.775589 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.775599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-27 00:58:14.775611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-27 00:58:14.775619 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.775635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-27 00:58:14.775646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-27 00:58:14.775652 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.775656 | orchestrator | 2026-02-27 00:58:14.775661 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-27 00:58:14.775666 | orchestrator | Friday 27 February 2026 00:55:07 +0000 (0:00:03.105) 0:03:54.644 ******* 2026-02-27 00:58:14.775671 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.775675 | orchestrator | changed: [testbed-node-2] 
2026-02-27 00:58:14.775680 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.775685 | orchestrator |
2026-02-27 00:58:14.775690 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-02-27 00:58:14.775694 | orchestrator | Friday 27 February 2026 00:55:09 +0000 (0:00:01.936) 0:03:56.581 *******
2026-02-27 00:58:14.775699 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775704 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775709 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775713 | orchestrator |
2026-02-27 00:58:14.775718 | orchestrator | TASK [include_role : masakari] *************************************************
2026-02-27 00:58:14.775723 | orchestrator | Friday 27 February 2026 00:55:11 +0000 (0:00:01.631) 0:03:58.212 *******
2026-02-27 00:58:14.775728 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775732 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775737 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775742 | orchestrator |
2026-02-27 00:58:14.775747 | orchestrator | TASK [include_role : memcached] ************************************************
2026-02-27 00:58:14.775752 | orchestrator | Friday 27 February 2026 00:55:11 +0000 (0:00:00.324) 0:03:58.537 *******
2026-02-27 00:58:14.775756 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:58:14.775761 | orchestrator |
2026-02-27 00:58:14.775766 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-02-27 00:58:14.775770 | orchestrator | Friday 27 February 2026 00:55:13 +0000 (0:00:01.456) 0:03:59.993 *******
2026-02-27 00:58:14.775776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group':
'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-27 00:58:14.775799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-27 00:58:14.775810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-27 00:58:14.775815 | orchestrator | 2026-02-27 00:58:14.775819 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-27 00:58:14.775824 | orchestrator | Friday 27 February 2026 00:55:14 +0000 (0:00:01.616) 0:04:01.610 ******* 2026-02-27 00:58:14.775832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-27 00:58:14.775837 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.775842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-27 00:58:14.775847 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.775852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-27 00:58:14.775860 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.775865 | orchestrator | 2026-02-27 00:58:14.775870 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-27 00:58:14.775875 | orchestrator | Friday 27 February 2026 00:55:15 +0000 (0:00:00.489) 0:04:02.099 ******* 2026-02-27 00:58:14.775880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-27 00:58:14.775885 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.775893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-27 00:58:14.775898 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-27 00:58:14.775908 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775912 | orchestrator |
2026-02-27 00:58:14.775917 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-02-27 00:58:14.775922 | orchestrator | Friday 27 February 2026 00:55:16 +0000 (0:00:00.919) 0:04:03.019 *******
2026-02-27 00:58:14.775927 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775932 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775936 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775941 | orchestrator |
2026-02-27 00:58:14.775946 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-02-27 00:58:14.775951 | orchestrator | Friday 27 February 2026 00:55:16 +0000 (0:00:00.491) 0:04:03.511 *******
2026-02-27 00:58:14.775955 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775960 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775965 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.775969 | orchestrator |
2026-02-27 00:58:14.775977 | orchestrator | TASK [include_role : mistral] **************************************************
2026-02-27 00:58:14.775982 | orchestrator | Friday 27 February 2026 00:55:18 +0000 (0:00:01.420) 0:04:04.932 *******
2026-02-27 00:58:14.775987 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.775991 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.775996 |
orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.776001 | orchestrator |
2026-02-27 00:58:14.776006 | orchestrator | TASK [include_role : neutron] **************************************************
2026-02-27 00:58:14.776010 | orchestrator | Friday 27 February 2026 00:55:18 +0000 (0:00:00.354) 0:04:05.286 *******
2026-02-27 00:58:14.776015 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:58:14.776020 | orchestrator |
2026-02-27 00:58:14.776025 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-02-27 00:58:14.776029 | orchestrator | Friday 27 February 2026 00:55:20 +0000 (0:00:01.508) 0:04:06.794 *******
2026-02-27 00:58:14.776034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-27 00:58:14.776043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment':
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-27 00:58:14.776090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 00:58:14.776456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.776522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 00:58:14.776528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-27 00:58:14.776535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-27 00:58:14.776591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776603 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776622 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.776693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.776714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776720 | orchestrator | 2026-02-27 
00:58:14.776725 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-27 00:58:14.776730 | orchestrator | Friday 27 February 2026 00:55:24 +0000 (0:00:04.495) 0:04:11.289 ******* 2026-02-27 00:58:14.776735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 00:58:14.776743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 
'timeout': '30'}}})  2026-02-27 00:58:14.776748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 00:58:14.776764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-27 00:58:14.776827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-27 00:58:14.776860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.776931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 
00:58:14.776941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.776978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-27 00:58:14.776986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.776996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': 
{'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.777016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.777034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777042 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.777051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-27 00:58:14.777065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.777073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2026-02-27 00:58:14.777082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.777090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.777095 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.777100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.777105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.777116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-27 00:58:14.777132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-27 00:58:14.777140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777145 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-27 00:58:14.777150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-27 00:58:14.777154 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.777159 | orchestrator | 2026-02-27 00:58:14.777164 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-27 00:58:14.777170 | orchestrator | Friday 27 February 2026 00:55:26 +0000 (0:00:01.554) 0:04:12.843 ******* 2026-02-27 
00:58:14.777175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-27 00:58:14.777180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-27 00:58:14.777185 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.777192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-27 00:58:14.777197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-27 00:58:14.777201 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.777206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-27 00:58:14.777210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-27 00:58:14.777215 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.777219 | orchestrator | 2026-02-27 00:58:14.777224 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-27 00:58:14.777229 | orchestrator | Friday 27 February 2026 00:55:28 +0000 (0:00:02.107) 0:04:14.951 ******* 2026-02-27 00:58:14.777233 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.777238 | 
orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.777242 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.777247 | orchestrator | 2026-02-27 00:58:14.777252 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-27 00:58:14.777256 | orchestrator | Friday 27 February 2026 00:55:29 +0000 (0:00:01.368) 0:04:16.319 ******* 2026-02-27 00:58:14.777263 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.777268 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.777272 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.777277 | orchestrator | 2026-02-27 00:58:14.777281 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-27 00:58:14.777286 | orchestrator | Friday 27 February 2026 00:55:31 +0000 (0:00:02.375) 0:04:18.695 ******* 2026-02-27 00:58:14.777291 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.777295 | orchestrator | 2026-02-27 00:58:14.777300 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-27 00:58:14.777304 | orchestrator | Friday 27 February 2026 00:55:33 +0000 (0:00:01.302) 0:04:19.997 ******* 2026-02-27 00:58:14.777309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.777319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.777412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.777423 | orchestrator | 2026-02-27 00:58:14.777430 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-27 00:58:14.777437 | orchestrator | Friday 27 February 2026 00:55:36 +0000 (0:00:03.619) 0:04:23.616 ******* 2026-02-27 00:58:14.777450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.777458 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.777465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.777479 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.777486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.777492 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.777499 | orchestrator | 2026-02-27 00:58:14.777506 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-27 00:58:14.777513 | orchestrator | Friday 27 February 2026 00:55:37 +0000 (0:00:00.558) 0:04:24.175 ******* 2026-02-27 00:58:14.777521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-02-27 00:58:14.777529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-27 00:58:14.777537 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.777566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-27 00:58:14.777572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-27 00:58:14.777577 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.777581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-27 00:58:14.777586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-27 00:58:14.777591 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.777595 | orchestrator | 2026-02-27 00:58:14.777600 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-27 00:58:14.777604 | orchestrator | Friday 27 February 2026 00:55:38 +0000 (0:00:00.807) 0:04:24.982 ******* 2026-02-27 00:58:14.777609 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.777613 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.777618 | 
orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.777622 | orchestrator | 2026-02-27 00:58:14.777630 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-27 00:58:14.777635 | orchestrator | Friday 27 February 2026 00:55:40 +0000 (0:00:01.881) 0:04:26.864 ******* 2026-02-27 00:58:14.777640 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.777644 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.777648 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.777660 | orchestrator | 2026-02-27 00:58:14.777665 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-27 00:58:14.777669 | orchestrator | Friday 27 February 2026 00:55:41 +0000 (0:00:01.817) 0:04:28.682 ******* 2026-02-27 00:58:14.777674 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.777678 | orchestrator | 2026-02-27 00:58:14.777682 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-27 00:58:14.777687 | orchestrator | Friday 27 February 2026 00:55:43 +0000 (0:00:01.611) 0:04:30.293 ******* 2026-02-27 00:58:14.777692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.777698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.777734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.777761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777775 | orchestrator | 2026-02-27 00:58:14.777795 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-27 00:58:14.777803 | orchestrator | Friday 27 February 2026 00:55:48 +0000 (0:00:04.626) 0:04:34.919 ******* 2026-02-27 00:58:14.777808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.777813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777823 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.777844 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.777853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777863 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.777941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.777960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.777989 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.777998 | orchestrator | 2026-02-27 00:58:14.778003 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-27 00:58:14.778007 | orchestrator | Friday 27 February 2026 00:55:49 +0000 (0:00:01.334) 0:04:36.254 ******* 2026-02-27 00:58:14.778045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778073 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778100 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778111 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-27 00:58:14.778127 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778132 | orchestrator | 2026-02-27 00:58:14.778137 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-27 00:58:14.778142 | orchestrator | Friday 27 February 2026 00:55:50 +0000 (0:00:01.029) 0:04:37.283 ******* 2026-02-27 00:58:14.778148 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.778153 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.778158 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.778163 | orchestrator | 2026-02-27 00:58:14.778169 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-27 00:58:14.778174 | orchestrator | Friday 27 February 2026 00:55:52 +0000 (0:00:01.577) 0:04:38.860 ******* 2026-02-27 00:58:14.778183 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.778188 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.778193 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.778199 | orchestrator | 2026-02-27 00:58:14.778219 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-27 00:58:14.778226 
| orchestrator | Friday 27 February 2026 00:55:54 +0000 (0:00:02.323) 0:04:41.184 ******* 2026-02-27 00:58:14.778234 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.778242 | orchestrator | 2026-02-27 00:58:14.778252 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-27 00:58:14.778260 | orchestrator | Friday 27 February 2026 00:55:56 +0000 (0:00:01.651) 0:04:42.835 ******* 2026-02-27 00:58:14.778268 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-27 00:58:14.778276 | orchestrator | 2026-02-27 00:58:14.778284 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-27 00:58:14.778291 | orchestrator | Friday 27 February 2026 00:55:56 +0000 (0:00:00.898) 0:04:43.734 ******* 2026-02-27 00:58:14.778304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-27 00:58:14.778310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-27 00:58:14.778316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-27 00:58:14.778322 | orchestrator | 2026-02-27 00:58:14.778327 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-27 00:58:14.778333 | orchestrator | Friday 27 February 2026 00:56:02 +0000 (0:00:05.069) 0:04:48.803 ******* 2026-02-27 00:58:14.778338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778343 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778358 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778370 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778375 | orchestrator | 2026-02-27 00:58:14.778396 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-27 00:58:14.778402 | orchestrator | Friday 27 February 2026 00:56:03 +0000 (0:00:01.250) 0:04:50.053 ******* 2026-02-27 00:58:14.778407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-27 00:58:14.778412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-27 00:58:14.778417 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-27 00:58:14.778426 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-27 00:58:14.778431 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-27 00:58:14.778444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-27 00:58:14.778448 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778453 | orchestrator | 2026-02-27 00:58:14.778457 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-27 00:58:14.778462 | orchestrator | Friday 27 February 2026 00:56:04 +0000 (0:00:01.698) 0:04:51.752 ******* 2026-02-27 00:58:14.778466 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.778471 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.778475 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.778480 | orchestrator | 2026-02-27 00:58:14.778484 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-27 00:58:14.778489 | orchestrator | Friday 27 February 2026 00:56:07 +0000 (0:00:02.565) 0:04:54.317 ******* 2026-02-27 00:58:14.778493 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.778498 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.778502 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.778507 | orchestrator | 
2026-02-27 00:58:14.778511 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-27 00:58:14.778519 | orchestrator | Friday 27 February 2026 00:56:11 +0000 (0:00:03.452) 0:04:57.769 ******* 2026-02-27 00:58:14.778524 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-27 00:58:14.778529 | orchestrator | 2026-02-27 00:58:14.778533 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-27 00:58:14.778538 | orchestrator | Friday 27 February 2026 00:56:12 +0000 (0:00:01.610) 0:04:59.379 ******* 2026-02-27 00:58:14.778543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778547 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778557 | orchestrator | skipping: 
[testbed-node-1] 2026-02-27 00:58:14.778575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778581 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778585 | orchestrator | 2026-02-27 00:58:14.778590 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-27 00:58:14.778594 | orchestrator | Friday 27 February 2026 00:56:13 +0000 (0:00:01.366) 0:05:00.746 ******* 2026-02-27 00:58:14.778600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778608 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778621 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-27 00:58:14.778633 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778638 | orchestrator | 2026-02-27 00:58:14.778643 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-27 00:58:14.778647 | orchestrator | Friday 27 February 2026 00:56:15 +0000 (0:00:01.371) 0:05:02.118 ******* 2026-02-27 00:58:14.778652 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778656 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778661 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778665 | orchestrator | 2026-02-27 00:58:14.778670 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-27 00:58:14.778674 | orchestrator | Friday 27 February 2026 00:56:17 +0000 (0:00:02.038) 0:05:04.156 ******* 2026-02-27 00:58:14.778679 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.778684 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.778688 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.778693 | orchestrator | 2026-02-27 00:58:14.778698 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2026-02-27 00:58:14.778702 | orchestrator | Friday 27 February 2026 00:56:20 +0000 (0:00:02.700) 0:05:06.857 ******* 2026-02-27 00:58:14.778707 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.778711 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.778716 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.778720 | orchestrator | 2026-02-27 00:58:14.778725 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-27 00:58:14.778730 | orchestrator | Friday 27 February 2026 00:56:23 +0000 (0:00:03.242) 0:05:10.100 ******* 2026-02-27 00:58:14.778734 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-27 00:58:14.778739 | orchestrator | 2026-02-27 00:58:14.778743 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-27 00:58:14.778748 | orchestrator | Friday 27 February 2026 00:56:24 +0000 (0:00:00.967) 0:05:11.068 ******* 2026-02-27 00:58:14.778766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-27 00:58:14.778772 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-27 00:58:14.778795 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-27 00:58:14.778808 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778813 | orchestrator | 2026-02-27 00:58:14.778820 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-27 00:58:14.778825 | orchestrator | Friday 27 February 2026 00:56:25 +0000 (0:00:01.221) 0:05:12.289 ******* 2026-02-27 00:58:14.778830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-27 00:58:14.778834 | orchestrator | skipping: [testbed-node-0] 
2026-02-27 00:58:14.778839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-27 00:58:14.778844 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-27 00:58:14.778853 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778858 | orchestrator | 2026-02-27 00:58:14.778862 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-27 00:58:14.778867 | orchestrator | Friday 27 February 2026 00:56:26 +0000 (0:00:01.258) 0:05:13.548 ******* 2026-02-27 00:58:14.778871 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.778876 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.778881 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.778885 | orchestrator | 2026-02-27 00:58:14.778890 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-27 00:58:14.778894 | orchestrator | Friday 27 
February 2026 00:56:28 +0000 (0:00:01.761) 0:05:15.310 ******* 2026-02-27 00:58:14.778899 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.778903 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.778908 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.778912 | orchestrator | 2026-02-27 00:58:14.778917 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-27 00:58:14.778921 | orchestrator | Friday 27 February 2026 00:56:31 +0000 (0:00:02.565) 0:05:17.876 ******* 2026-02-27 00:58:14.778926 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.778930 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.778935 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.778939 | orchestrator | 2026-02-27 00:58:14.778944 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-27 00:58:14.778948 | orchestrator | Friday 27 February 2026 00:56:34 +0000 (0:00:03.756) 0:05:21.632 ******* 2026-02-27 00:58:14.778966 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.778975 | orchestrator | 2026-02-27 00:58:14.778980 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-27 00:58:14.778984 | orchestrator | Friday 27 February 2026 00:56:36 +0000 (0:00:01.648) 0:05:23.281 ******* 2026-02-27 00:58:14.778989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.778996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 00:58:14.779001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.779031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.779042 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 00:58:14.779050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.779064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.779086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 00:58:14.779091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.779108 | orchestrator | 2026-02-27 00:58:14.779113 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-27 00:58:14.779118 | orchestrator | Friday 27 February 2026 00:56:40 +0000 (0:00:03.846) 0:05:27.127 ******* 2026-02-27 00:58:14.779123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.779127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 00:58:14.779149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.779168 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.779173 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.779177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 00:58:14.779182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.779214 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.779221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.779226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 00:58:14.779231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 00:58:14.779257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 00:58:14.779263 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.779267 | orchestrator | 2026-02-27 00:58:14.779272 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-27 00:58:14.779277 | orchestrator | Friday 27 February 2026 00:56:41 +0000 (0:00:00.782) 0:05:27.909 ******* 2026-02-27 00:58:14.779281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-27 00:58:14.779286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-27 00:58:14.779291 | orchestrator | skipping: 
[testbed-node-0] 2026-02-27 00:58:14.779295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-27 00:58:14.779300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-27 00:58:14.779304 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.779312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-27 00:58:14.779316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-27 00:58:14.779321 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.779326 | orchestrator | 2026-02-27 00:58:14.779330 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-27 00:58:14.779334 | orchestrator | Friday 27 February 2026 00:56:42 +0000 (0:00:01.679) 0:05:29.589 ******* 2026-02-27 00:58:14.779339 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.779343 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.779348 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.779353 | orchestrator | 2026-02-27 00:58:14.779357 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-27 00:58:14.779362 | orchestrator | Friday 27 February 2026 00:56:44 +0000 (0:00:01.500) 0:05:31.090 ******* 2026-02-27 00:58:14.779366 | orchestrator 
| changed: [testbed-node-0] 2026-02-27 00:58:14.779371 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.779375 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.779380 | orchestrator | 2026-02-27 00:58:14.779384 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-27 00:58:14.779392 | orchestrator | Friday 27 February 2026 00:56:46 +0000 (0:00:02.247) 0:05:33.338 ******* 2026-02-27 00:58:14.779396 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.779401 | orchestrator | 2026-02-27 00:58:14.779405 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-27 00:58:14.779410 | orchestrator | Friday 27 February 2026 00:56:48 +0000 (0:00:01.754) 0:05:35.092 ******* 2026-02-27 00:58:14.779415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 00:58:14.779433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 00:58:14.779438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 00:58:14.779446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 00:58:14.779455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 00:58:14.779474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 00:58:14.779479 | orchestrator | 2026-02-27 00:58:14.779484 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-27 00:58:14.779489 | orchestrator | Friday 27 February 2026 00:56:53 +0000 (0:00:05.283) 0:05:40.376 ******* 2026-02-27 00:58:14.779496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 00:58:14.779501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 00:58:14.779514 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.779523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 00:58:14.779552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 00:58:14.779561 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.779570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 00:58:14.779579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 00:58:14.779588 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.779593 | orchestrator | 2026-02-27 00:58:14.779598 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-27 00:58:14.779605 | orchestrator | Friday 27 February 2026 00:56:54 +0000 (0:00:00.722) 0:05:41.098 ******* 2026-02-27 00:58:14.779610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-27 00:58:14.779615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-27 00:58:14.779619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-27 00:58:14.779624 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.779629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-27 00:58:14.779633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-27 00:58:14.779638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-27 00:58:14.779643 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.779647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-27 00:58:14.779666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-27 00:58:14.779672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-27 00:58:14.779677 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.779681 | orchestrator | 2026-02-27 00:58:14.779686 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-27 00:58:14.779690 | orchestrator | Friday 27 February 2026 00:56:55 +0000 (0:00:00.941) 0:05:42.040 ******* 2026-02-27 00:58:14.779695 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.779699 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.779704 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.779708 | orchestrator | 2026-02-27 00:58:14.779713 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-27 00:58:14.779717 | orchestrator | Friday 27 February 2026 00:56:56 +0000 (0:00:00.844) 0:05:42.884 ******* 2026-02-27 00:58:14.779725 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.779730 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.779734 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.779739 | orchestrator | 2026-02-27 00:58:14.779743 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-27 00:58:14.779748 | orchestrator | Friday 27 February 2026 00:56:57 +0000 (0:00:01.415) 0:05:44.299 ******* 2026-02-27 00:58:14.779755 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.779759 | orchestrator | 2026-02-27 00:58:14.779764 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-27 00:58:14.779768 | orchestrator | Friday 27 February 2026 00:56:59 +0000 (0:00:01.498) 0:05:45.798 ******* 2026-02-27 
00:58:14.779773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-27 00:58:14.779778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 00:58:14.779818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:58:14.779823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:58:14.779843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 00:58:14.779849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-27 00:58:14.779867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 00:58:14.779872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:58:14.779877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:58:14.779882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 00:58:14.779887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-27 00:58:14.779904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 00:58:14.779916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:58:14.779923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:58:14.779928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 00:58:14.779933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-27 00:58:14.779938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-27 00:58:14.779946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 00:58:14.779956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.779963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.779968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-27 00:58:14.779973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-27 00:58:14.779978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.779987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.779995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-27 00:58:14.780008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-27 00:58:14.780013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes':
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780030 | orchestrator |
2026-02-27 00:58:14.780037 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2026-02-27 00:58:14.780042 | orchestrator | Friday 27 February 2026 00:57:04 +0000 (0:00:05.231) 0:05:51.030 *******
2026-02-27 00:58:14.780047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-27 00:58:14.780054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 00:58:14.780059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-27 00:58:14.780073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 00:58:14.780092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-27 00:58:14.780097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-27 00:58:14.780106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-27 00:58:14.780135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780139 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.780144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-27 00:58:14.780148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-27 00:58:14.780169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 00:58:14.780178 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.780182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-27 00:58:14.780207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-02-27 00:58:14.780212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 00:58:14.780220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-27 00:58:14.780227 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.780231 | orchestrator |
2026-02-27 00:58:14.780235 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-02-27 00:58:14.780240 | orchestrator | Friday 27 February 2026 00:57:05 +0000 (0:00:00.909) 0:05:51.939 *******
2026-02-27 00:58:14.780244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-27 00:58:14.780248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-27 00:58:14.780252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-27 00:58:14.780257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-27 00:58:14.780263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-27 00:58:14.780268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-27 00:58:14.780272 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.780277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-27 00:58:14.780281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-27 00:58:14.780285 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.780291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-02-27 00:58:14.780296 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-02-27 00:58:14.780300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-27 00:58:14.780304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-02-27 00:58:14.780309 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.780313 | orchestrator |
2026-02-27 00:58:14.780317 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-02-27 00:58:14.780323 | orchestrator | Friday 27 February 2026 00:57:06 +0000 (0:00:01.038) 0:05:52.977 *******
2026-02-27 00:58:14.780327 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.780331 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.780335 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.780340 | orchestrator |
2026-02-27 00:58:14.780344 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-02-27 00:58:14.780348 | orchestrator | Friday 27 February 2026 00:57:06 +0000 (0:00:00.483) 0:05:53.460 *******
2026-02-27 00:58:14.780352 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.780356 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.780360 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.780364 | orchestrator |
2026-02-27 00:58:14.780368 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-02-27 00:58:14.780372 | orchestrator | Friday 27 February 2026 00:57:08 +0000 (0:00:01.580) 0:05:55.041 *******
2026-02-27 00:58:14.780376 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 00:58:14.780380 | orchestrator |
2026-02-27 00:58:14.780385 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-02-27 00:58:14.780389 | orchestrator | Friday 27 February 2026 00:57:10 +0000 (0:00:01.949) 0:05:56.990 *******
2026-02-27 00:58:14.780395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-27 00:58:14.780400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-27 00:58:14.780406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-27 00:58:14.780414 | orchestrator |
2026-02-27 00:58:14.780418 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-02-27 00:58:14.780422 | orchestrator | Friday 27 February 2026 00:57:13 +0000 (0:00:02.997) 0:05:59.987 *******
2026-02-27 00:58:14.780426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-27 00:58:14.780431 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.780437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-27 00:58:14.780442 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-27 00:58:14.780450 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780454 | orchestrator | 2026-02-27 00:58:14.780461 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-27 00:58:14.780468 | orchestrator | Friday 27 February 2026 00:57:14 +0000 (0:00:00.773) 0:06:00.761 ******* 2026-02-27 00:58:14.780473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-27 00:58:14.780477 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 
'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-27 00:58:14.780485 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-27 00:58:14.780494 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780498 | orchestrator | 2026-02-27 00:58:14.780502 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-27 00:58:14.780506 | orchestrator | Friday 27 February 2026 00:57:14 +0000 (0:00:00.685) 0:06:01.446 ******* 2026-02-27 00:58:14.780510 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780514 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780518 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780522 | orchestrator | 2026-02-27 00:58:14.780526 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-27 00:58:14.780531 | orchestrator | Friday 27 February 2026 00:57:15 +0000 (0:00:00.486) 0:06:01.932 ******* 2026-02-27 00:58:14.780538 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780546 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780554 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780561 | orchestrator | 2026-02-27 00:58:14.780567 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-27 00:58:14.780574 | orchestrator | Friday 27 February 2026 00:57:16 +0000 (0:00:01.414) 0:06:03.347 ******* 2026-02-27 00:58:14.780580 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 00:58:14.780588 | orchestrator | 2026-02-27 00:58:14.780592 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 
2026-02-27 00:58:14.780596 | orchestrator | Friday 27 February 2026 00:57:18 +0000 (0:00:01.964) 0:06:05.311 ******* 2026-02-27 00:58:14.780600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.780608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.780619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.780624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.780629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.780636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-27 00:58:14.780643 | 
orchestrator | 2026-02-27 00:58:14.780647 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-27 00:58:14.780651 | orchestrator | Friday 27 February 2026 00:57:25 +0000 (0:00:06.747) 0:06:12.059 ******* 2026-02-27 00:58:14.780657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.780662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.780666 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.780677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.780684 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.780696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-27 00:58:14.780700 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780704 | orchestrator | 2026-02-27 00:58:14.780708 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-27 00:58:14.780713 | orchestrator | Friday 27 February 2026 00:57:26 +0000 (0:00:00.837) 0:06:12.897 ******* 2026-02-27 00:58:14.780717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780734 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780742 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780760 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-27 00:58:14.780794 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780798 | orchestrator | 
2026-02-27 00:58:14.780802 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-27 00:58:14.780806 | orchestrator | Friday 27 February 2026 00:57:28 +0000 (0:00:01.954) 0:06:14.852 ******* 2026-02-27 00:58:14.780811 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.780815 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.780819 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.780823 | orchestrator | 2026-02-27 00:58:14.780827 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-27 00:58:14.780831 | orchestrator | Friday 27 February 2026 00:57:29 +0000 (0:00:01.378) 0:06:16.230 ******* 2026-02-27 00:58:14.780835 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.780839 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.780844 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.780848 | orchestrator | 2026-02-27 00:58:14.780866 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-27 00:58:14.780871 | orchestrator | Friday 27 February 2026 00:57:31 +0000 (0:00:02.430) 0:06:18.661 ******* 2026-02-27 00:58:14.780875 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780879 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780883 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780887 | orchestrator | 2026-02-27 00:58:14.780891 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-27 00:58:14.780895 | orchestrator | Friday 27 February 2026 00:57:32 +0000 (0:00:00.410) 0:06:19.071 ******* 2026-02-27 00:58:14.780899 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780903 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780907 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780911 | orchestrator | 
2026-02-27 00:58:14.780915 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-27 00:58:14.780920 | orchestrator | Friday 27 February 2026 00:57:32 +0000 (0:00:00.321) 0:06:19.393 ******* 2026-02-27 00:58:14.780924 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780928 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780932 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780936 | orchestrator | 2026-02-27 00:58:14.780940 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-27 00:58:14.780944 | orchestrator | Friday 27 February 2026 00:57:33 +0000 (0:00:00.721) 0:06:20.115 ******* 2026-02-27 00:58:14.780951 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780955 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780959 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780963 | orchestrator | 2026-02-27 00:58:14.780967 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-27 00:58:14.780971 | orchestrator | Friday 27 February 2026 00:57:33 +0000 (0:00:00.360) 0:06:20.475 ******* 2026-02-27 00:58:14.780975 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.780979 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.780983 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.780987 | orchestrator | 2026-02-27 00:58:14.780991 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-27 00:58:14.780995 | orchestrator | Friday 27 February 2026 00:57:34 +0000 (0:00:00.362) 0:06:20.838 ******* 2026-02-27 00:58:14.780999 | orchestrator | skipping: [testbed-node-0] 2026-02-27 00:58:14.781003 | orchestrator | skipping: [testbed-node-1] 2026-02-27 00:58:14.781007 | orchestrator | skipping: [testbed-node-2] 2026-02-27 00:58:14.781011 | orchestrator | 
2026-02-27 00:58:14.781015 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-27 00:58:14.781020 | orchestrator | Friday 27 February 2026 00:57:35 +0000 (0:00:00.924) 0:06:21.763 ******* 2026-02-27 00:58:14.781024 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.781028 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.781032 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.781036 | orchestrator | 2026-02-27 00:58:14.781040 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-27 00:58:14.781044 | orchestrator | Friday 27 February 2026 00:57:35 +0000 (0:00:00.819) 0:06:22.582 ******* 2026-02-27 00:58:14.781048 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.781052 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.781056 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.781060 | orchestrator | 2026-02-27 00:58:14.781064 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-27 00:58:14.781068 | orchestrator | Friday 27 February 2026 00:57:36 +0000 (0:00:00.378) 0:06:22.961 ******* 2026-02-27 00:58:14.781073 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.781076 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.781081 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.781085 | orchestrator | 2026-02-27 00:58:14.781091 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-27 00:58:14.781095 | orchestrator | Friday 27 February 2026 00:57:37 +0000 (0:00:00.987) 0:06:23.948 ******* 2026-02-27 00:58:14.781099 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.781103 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.781107 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.781111 | orchestrator | 2026-02-27 00:58:14.781115 | orchestrator | RUNNING HANDLER [loadbalancer : 
Stop backup proxysql container] **************** 2026-02-27 00:58:14.781119 | orchestrator | Friday 27 February 2026 00:57:38 +0000 (0:00:01.380) 0:06:25.328 ******* 2026-02-27 00:58:14.781123 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.781127 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.781131 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.781135 | orchestrator | 2026-02-27 00:58:14.781139 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-27 00:58:14.781144 | orchestrator | Friday 27 February 2026 00:57:39 +0000 (0:00:01.063) 0:06:26.392 ******* 2026-02-27 00:58:14.781148 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.781152 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.781156 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.781160 | orchestrator | 2026-02-27 00:58:14.781164 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-27 00:58:14.781168 | orchestrator | Friday 27 February 2026 00:57:44 +0000 (0:00:04.940) 0:06:31.333 ******* 2026-02-27 00:58:14.781172 | orchestrator | ok: [testbed-node-0] 2026-02-27 00:58:14.781176 | orchestrator | ok: [testbed-node-1] 2026-02-27 00:58:14.781183 | orchestrator | ok: [testbed-node-2] 2026-02-27 00:58:14.781187 | orchestrator | 2026-02-27 00:58:14.781191 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-27 00:58:14.781195 | orchestrator | Friday 27 February 2026 00:57:47 +0000 (0:00:02.856) 0:06:34.190 ******* 2026-02-27 00:58:14.781202 | orchestrator | changed: [testbed-node-0] 2026-02-27 00:58:14.781206 | orchestrator | changed: [testbed-node-1] 2026-02-27 00:58:14.781210 | orchestrator | changed: [testbed-node-2] 2026-02-27 00:58:14.781214 | orchestrator | 2026-02-27 00:58:14.781218 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] 
*************
2026-02-27 00:58:14.781222 | orchestrator | Friday 27 February 2026 00:57:57 +0000 (0:00:09.664) 0:06:43.854 *******
2026-02-27 00:58:14.781226 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:58:14.781230 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:58:14.781234 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:58:14.781238 | orchestrator |
2026-02-27 00:58:14.781242 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-27 00:58:14.781247 | orchestrator | Friday 27 February 2026 00:58:00 +0000 (0:00:03.837) 0:06:47.692 *******
2026-02-27 00:58:14.781251 | orchestrator | changed: [testbed-node-0]
2026-02-27 00:58:14.781255 | orchestrator | changed: [testbed-node-2]
2026-02-27 00:58:14.781259 | orchestrator | changed: [testbed-node-1]
2026-02-27 00:58:14.781263 | orchestrator |
2026-02-27 00:58:14.781267 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-27 00:58:14.781271 | orchestrator | Friday 27 February 2026 00:58:05 +0000 (0:00:04.607) 0:06:52.299 *******
2026-02-27 00:58:14.781275 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.781279 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.781283 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.781287 | orchestrator |
2026-02-27 00:58:14.781291 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-27 00:58:14.781295 | orchestrator | Friday 27 February 2026 00:58:05 +0000 (0:00:00.337) 0:06:52.637 *******
2026-02-27 00:58:14.781299 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.781303 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.781307 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.781311 | orchestrator |
2026-02-27 00:58:14.781315 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-27 00:58:14.781319 | orchestrator | Friday 27 February 2026 00:58:06 +0000 (0:00:00.542) 0:06:53.179 *******
2026-02-27 00:58:14.781324 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.781328 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.781332 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.781336 | orchestrator |
2026-02-27 00:58:14.781340 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-27 00:58:14.781344 | orchestrator | Friday 27 February 2026 00:58:06 +0000 (0:00:00.318) 0:06:53.498 *******
2026-02-27 00:58:14.781348 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.781352 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.781356 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.781360 | orchestrator |
2026-02-27 00:58:14.781364 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-27 00:58:14.781368 | orchestrator | Friday 27 February 2026 00:58:07 +0000 (0:00:00.336) 0:06:53.834 *******
2026-02-27 00:58:14.781372 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.781376 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.781380 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.781384 | orchestrator |
2026-02-27 00:58:14.781388 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-27 00:58:14.781392 | orchestrator | Friday 27 February 2026 00:58:07 +0000 (0:00:00.422) 0:06:54.256 *******
2026-02-27 00:58:14.781397 | orchestrator | skipping: [testbed-node-0]
2026-02-27 00:58:14.781401 | orchestrator | skipping: [testbed-node-1]
2026-02-27 00:58:14.781405 | orchestrator | skipping: [testbed-node-2]
2026-02-27 00:58:14.781412 | orchestrator |
2026-02-27 00:58:14.781416 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-27 00:58:14.781420 | orchestrator | Friday 27 February 2026 00:58:07 +0000 (0:00:00.354) 0:06:54.611 *******
2026-02-27 00:58:14.781424 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:58:14.781428 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:58:14.781432 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:58:14.781436 | orchestrator |
2026-02-27 00:58:14.781440 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-27 00:58:14.781444 | orchestrator | Friday 27 February 2026 00:58:12 +0000 (0:00:04.993) 0:06:59.604 *******
2026-02-27 00:58:14.781448 | orchestrator | ok: [testbed-node-0]
2026-02-27 00:58:14.781452 | orchestrator | ok: [testbed-node-1]
2026-02-27 00:58:14.781456 | orchestrator | ok: [testbed-node-2]
2026-02-27 00:58:14.781460 | orchestrator |
2026-02-27 00:58:14.781465 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 00:58:14.781471 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-27 00:58:14.781476 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-27 00:58:14.781480 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-27 00:58:14.781484 | orchestrator |
2026-02-27 00:58:14.781488 | orchestrator |
2026-02-27 00:58:14.781492 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 00:58:14.781496 | orchestrator | Friday 27 February 2026 00:58:13 +0000 (0:00:00.884) 0:07:00.489 *******
2026-02-27 00:58:14.781500 | orchestrator | ===============================================================================
2026-02-27 00:58:14.781504 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 9.66s
2026-02-27 00:58:14.781509 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.63s
2026-02-27 00:58:14.781513 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.75s
2026-02-27 00:58:14.781517 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.17s
2026-02-27 00:58:14.781521 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 6.06s
2026-02-27 00:58:14.781525 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.40s
2026-02-27 00:58:14.781531 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.28s
2026-02-27 00:58:14.781535 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.23s
2026-02-27 00:58:14.781539 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.07s
2026-02-27 00:58:14.781543 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.03s
2026-02-27 00:58:14.781547 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.99s
2026-02-27 00:58:14.781551 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.94s
2026-02-27 00:58:14.781555 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.90s
2026-02-27 00:58:14.781559 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.83s
2026-02-27 00:58:14.781563 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.63s
2026-02-27 00:58:14.781568 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.61s
2026-02-27 00:58:14.781572 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.50s
2026-02-27 00:58:14.781576 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.46s
2026-02-27 00:58:14.781580 | orchestrator | loadbalancer : Ensuring config directories exist ------------------------ 4.09s
2026-02-27 00:58:14.781584 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.92s
2026-02-27 00:58:14.781591 | orchestrator | 2026-02-27 00:58:14 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:58:14.781595 | orchestrator | 2026-02-27 00:58:14 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:58:17.822712 | orchestrator | 2026-02-27 00:58:17 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED
2026-02-27 00:58:17.824782 | orchestrator | 2026-02-27 00:58:17 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:58:17.827060 | orchestrator | 2026-02-27 00:58:17 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED
2026-02-27 00:58:17.827105 | orchestrator | 2026-02-27 00:58:17 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:58:20.872533 | orchestrator | 2026-02-27 00:58:20 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED
2026-02-27 00:58:20.873991 | orchestrator | 2026-02-27 00:58:20 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:58:20.875469 | orchestrator | 2026-02-27 00:58:20 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED
2026-02-27 00:58:20.875516 | orchestrator | 2026-02-27 00:58:20 | INFO  | Wait 1 second(s) until the next check
2026-02-27 00:58:23.909637 | orchestrator | 2026-02-27 00:58:23 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED
2026-02-27 00:58:23.910585 | orchestrator | 2026-02-27 00:58:23 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED
2026-02-27 00:58:23.911428 | orchestrator | 2026-02-27 00:58:23 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED
2026-02-27 00:58:23.911462 | orchestrator | 2026-02-27 00:58:23 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:26.954746 | orchestrator | 2026-02-27 00:58:26 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:26.956351 | orchestrator | 2026-02-27 00:58:26 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:26.957379 | orchestrator | 2026-02-27 00:58:26 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:26.957449 | orchestrator | 2026-02-27 00:58:26 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:29.998440 | orchestrator | 2026-02-27 00:58:29 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:30.001774 | orchestrator | 2026-02-27 00:58:30 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:30.002465 | orchestrator | 2026-02-27 00:58:30 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:30.002517 | orchestrator | 2026-02-27 00:58:30 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:33.097211 | orchestrator | 2026-02-27 00:58:33 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:33.097645 | orchestrator | 2026-02-27 00:58:33 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:33.099905 | orchestrator | 2026-02-27 00:58:33 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:33.100424 | orchestrator | 2026-02-27 00:58:33 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:36.143199 | orchestrator | 2026-02-27 00:58:36 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:36.143513 | orchestrator | 2026-02-27 00:58:36 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:36.146433 | orchestrator | 2026-02-27 
00:58:36 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:36.146520 | orchestrator | 2026-02-27 00:58:36 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:39.177948 | orchestrator | 2026-02-27 00:58:39 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:39.179373 | orchestrator | 2026-02-27 00:58:39 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:39.180210 | orchestrator | 2026-02-27 00:58:39 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:39.180240 | orchestrator | 2026-02-27 00:58:39 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:42.219630 | orchestrator | 2026-02-27 00:58:42 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:42.225747 | orchestrator | 2026-02-27 00:58:42 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:42.230412 | orchestrator | 2026-02-27 00:58:42 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:42.230523 | orchestrator | 2026-02-27 00:58:42 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:45.261620 | orchestrator | 2026-02-27 00:58:45 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:45.263844 | orchestrator | 2026-02-27 00:58:45 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:45.263964 | orchestrator | 2026-02-27 00:58:45 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:45.263975 | orchestrator | 2026-02-27 00:58:45 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:48.304052 | orchestrator | 2026-02-27 00:58:48 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:48.305980 | orchestrator | 2026-02-27 00:58:48 | INFO  | Task 
929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:48.309579 | orchestrator | 2026-02-27 00:58:48 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:48.309653 | orchestrator | 2026-02-27 00:58:48 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:51.360061 | orchestrator | 2026-02-27 00:58:51 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:51.360485 | orchestrator | 2026-02-27 00:58:51 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:51.361095 | orchestrator | 2026-02-27 00:58:51 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:51.361126 | orchestrator | 2026-02-27 00:58:51 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:54.424514 | orchestrator | 2026-02-27 00:58:54 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:54.425078 | orchestrator | 2026-02-27 00:58:54 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:54.425883 | orchestrator | 2026-02-27 00:58:54 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:54.425958 | orchestrator | 2026-02-27 00:58:54 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:58:57.459476 | orchestrator | 2026-02-27 00:58:57 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:58:57.461389 | orchestrator | 2026-02-27 00:58:57 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:58:57.461637 | orchestrator | 2026-02-27 00:58:57 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:58:57.462156 | orchestrator | 2026-02-27 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:00.500603 | orchestrator | 2026-02-27 00:59:00 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state 
STARTED 2026-02-27 00:59:00.501518 | orchestrator | 2026-02-27 00:59:00 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:00.502503 | orchestrator | 2026-02-27 00:59:00 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:00.502522 | orchestrator | 2026-02-27 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:03.549443 | orchestrator | 2026-02-27 00:59:03 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:03.550739 | orchestrator | 2026-02-27 00:59:03 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:03.552388 | orchestrator | 2026-02-27 00:59:03 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:03.553298 | orchestrator | 2026-02-27 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:06.602093 | orchestrator | 2026-02-27 00:59:06 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:06.602800 | orchestrator | 2026-02-27 00:59:06 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:06.604478 | orchestrator | 2026-02-27 00:59:06 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:06.604505 | orchestrator | 2026-02-27 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:09.657004 | orchestrator | 2026-02-27 00:59:09 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:09.660090 | orchestrator | 2026-02-27 00:59:09 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:09.663277 | orchestrator | 2026-02-27 00:59:09 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:09.663352 | orchestrator | 2026-02-27 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:12.710127 | orchestrator | 
2026-02-27 00:59:12 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:12.710705 | orchestrator | 2026-02-27 00:59:12 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:12.710737 | orchestrator | 2026-02-27 00:59:12 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:12.710751 | orchestrator | 2026-02-27 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:15.755324 | orchestrator | 2026-02-27 00:59:15 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:15.756092 | orchestrator | 2026-02-27 00:59:15 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:15.758306 | orchestrator | 2026-02-27 00:59:15 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:15.758381 | orchestrator | 2026-02-27 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:18.812065 | orchestrator | 2026-02-27 00:59:18 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:18.813432 | orchestrator | 2026-02-27 00:59:18 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:18.815513 | orchestrator | 2026-02-27 00:59:18 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:18.815575 | orchestrator | 2026-02-27 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:21.853513 | orchestrator | 2026-02-27 00:59:21 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:21.854872 | orchestrator | 2026-02-27 00:59:21 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:21.859826 | orchestrator | 2026-02-27 00:59:21 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:21.859868 | orchestrator | 2026-02-27 00:59:21 | INFO  | 
Wait 1 second(s) until the next check 2026-02-27 00:59:24.915504 | orchestrator | 2026-02-27 00:59:24 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:24.917496 | orchestrator | 2026-02-27 00:59:24 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:24.920278 | orchestrator | 2026-02-27 00:59:24 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:24.921111 | orchestrator | 2026-02-27 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:27.976809 | orchestrator | 2026-02-27 00:59:27 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:27.978699 | orchestrator | 2026-02-27 00:59:27 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:27.981112 | orchestrator | 2026-02-27 00:59:27 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:27.981317 | orchestrator | 2026-02-27 00:59:27 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:31.036081 | orchestrator | 2026-02-27 00:59:31 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:31.037341 | orchestrator | 2026-02-27 00:59:31 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:31.038139 | orchestrator | 2026-02-27 00:59:31 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:31.038187 | orchestrator | 2026-02-27 00:59:31 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:34.091744 | orchestrator | 2026-02-27 00:59:34 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:34.093415 | orchestrator | 2026-02-27 00:59:34 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:34.095348 | orchestrator | 2026-02-27 00:59:34 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state 
STARTED 2026-02-27 00:59:34.095388 | orchestrator | 2026-02-27 00:59:34 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:37.147147 | orchestrator | 2026-02-27 00:59:37 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:37.147711 | orchestrator | 2026-02-27 00:59:37 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:37.148863 | orchestrator | 2026-02-27 00:59:37 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:37.149760 | orchestrator | 2026-02-27 00:59:37 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:40.186996 | orchestrator | 2026-02-27 00:59:40 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:40.188675 | orchestrator | 2026-02-27 00:59:40 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:40.190399 | orchestrator | 2026-02-27 00:59:40 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:40.190442 | orchestrator | 2026-02-27 00:59:40 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:43.239103 | orchestrator | 2026-02-27 00:59:43 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:43.241978 | orchestrator | 2026-02-27 00:59:43 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:43.243753 | orchestrator | 2026-02-27 00:59:43 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:43.244204 | orchestrator | 2026-02-27 00:59:43 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:46.326377 | orchestrator | 2026-02-27 00:59:46 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:46.328132 | orchestrator | 2026-02-27 00:59:46 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:46.329872 | orchestrator | 
2026-02-27 00:59:46 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:46.329937 | orchestrator | 2026-02-27 00:59:46 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:49.374345 | orchestrator | 2026-02-27 00:59:49 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:49.375146 | orchestrator | 2026-02-27 00:59:49 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:49.377214 | orchestrator | 2026-02-27 00:59:49 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:49.377264 | orchestrator | 2026-02-27 00:59:49 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:52.431406 | orchestrator | 2026-02-27 00:59:52 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:52.432422 | orchestrator | 2026-02-27 00:59:52 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:52.433872 | orchestrator | 2026-02-27 00:59:52 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:52.433903 | orchestrator | 2026-02-27 00:59:52 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:55.485738 | orchestrator | 2026-02-27 00:59:55 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:55.487768 | orchestrator | 2026-02-27 00:59:55 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:55.489479 | orchestrator | 2026-02-27 00:59:55 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:55.489631 | orchestrator | 2026-02-27 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-02-27 00:59:58.551561 | orchestrator | 2026-02-27 00:59:58 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 00:59:58.556715 | orchestrator | 2026-02-27 00:59:58 | INFO  | Task 
929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 00:59:58.559587 | orchestrator | 2026-02-27 00:59:58 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 00:59:58.559634 | orchestrator | 2026-02-27 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:01.614430 | orchestrator | 2026-02-27 01:00:01 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:01.616641 | orchestrator | 2026-02-27 01:00:01 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:01.618367 | orchestrator | 2026-02-27 01:00:01 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:01.618432 | orchestrator | 2026-02-27 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:04.658179 | orchestrator | 2026-02-27 01:00:04 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:04.659310 | orchestrator | 2026-02-27 01:00:04 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:04.662292 | orchestrator | 2026-02-27 01:00:04 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:04.662366 | orchestrator | 2026-02-27 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:07.705830 | orchestrator | 2026-02-27 01:00:07 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:07.708198 | orchestrator | 2026-02-27 01:00:07 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:07.712106 | orchestrator | 2026-02-27 01:00:07 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:07.712162 | orchestrator | 2026-02-27 01:00:07 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:10.759665 | orchestrator | 2026-02-27 01:00:10 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state 
STARTED 2026-02-27 01:00:10.762272 | orchestrator | 2026-02-27 01:00:10 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:10.766264 | orchestrator | 2026-02-27 01:00:10 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:10.766312 | orchestrator | 2026-02-27 01:00:10 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:13.816683 | orchestrator | 2026-02-27 01:00:13 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:13.818196 | orchestrator | 2026-02-27 01:00:13 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:13.820773 | orchestrator | 2026-02-27 01:00:13 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:13.820833 | orchestrator | 2026-02-27 01:00:13 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:16.876767 | orchestrator | 2026-02-27 01:00:16 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:16.878423 | orchestrator | 2026-02-27 01:00:16 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:16.880175 | orchestrator | 2026-02-27 01:00:16 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:16.880558 | orchestrator | 2026-02-27 01:00:16 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:19.937692 | orchestrator | 2026-02-27 01:00:19 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:19.941370 | orchestrator | 2026-02-27 01:00:19 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:19.945456 | orchestrator | 2026-02-27 01:00:19 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:19.945512 | orchestrator | 2026-02-27 01:00:19 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:22.994495 | orchestrator | 
2026-02-27 01:00:22 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:22.997860 | orchestrator | 2026-02-27 01:00:22 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:23.000681 | orchestrator | 2026-02-27 01:00:23 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:23.000762 | orchestrator | 2026-02-27 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:26.062089 | orchestrator | 2026-02-27 01:00:26 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:26.064296 | orchestrator | 2026-02-27 01:00:26 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:26.068261 | orchestrator | 2026-02-27 01:00:26 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:26.070195 | orchestrator | 2026-02-27 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:29.107874 | orchestrator | 2026-02-27 01:00:29 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:29.110164 | orchestrator | 2026-02-27 01:00:29 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:29.113743 | orchestrator | 2026-02-27 01:00:29 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:29.113809 | orchestrator | 2026-02-27 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:32.158758 | orchestrator | 2026-02-27 01:00:32 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:32.159529 | orchestrator | 2026-02-27 01:00:32 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:32.160830 | orchestrator | 2026-02-27 01:00:32 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:32.160906 | orchestrator | 2026-02-27 01:00:32 | INFO  | 
Wait 1 second(s) until the next check 2026-02-27 01:00:35.211249 | orchestrator | 2026-02-27 01:00:35 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:35.215836 | orchestrator | 2026-02-27 01:00:35 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:35.215916 | orchestrator | 2026-02-27 01:00:35 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:35.215929 | orchestrator | 2026-02-27 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:38.263154 | orchestrator | 2026-02-27 01:00:38 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:38.264439 | orchestrator | 2026-02-27 01:00:38 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state STARTED 2026-02-27 01:00:38.266207 | orchestrator | 2026-02-27 01:00:38 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:38.266245 | orchestrator | 2026-02-27 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:00:41.304324 | orchestrator | 2026-02-27 01:00:41 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:41.313357 | orchestrator | 2026-02-27 01:00:41 | INFO  | Task 929e3c69-8775-4ef6-8f45-30290b2ec5d9 is in state SUCCESS 2026-02-27 01:00:41.315902 | orchestrator | 2026-02-27 01:00:41.315961 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-27 01:00:41.315977 | orchestrator | 2.16.14 2026-02-27 01:00:41.315986 | orchestrator | 2026-02-27 01:00:41.315993 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-27 01:00:41.316002 | orchestrator | 2026-02-27 01:00:41.316009 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-27 01:00:41.316017 | orchestrator | Friday 27 February 2026 00:48:35 +0000 (0:00:00.958) 
0:00:00.958 ******* 2026-02-27 01:00:41.316025 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.316034 | orchestrator | 2026-02-27 01:00:41.316040 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-27 01:00:41.316046 | orchestrator | Friday 27 February 2026 00:48:37 +0000 (0:00:01.621) 0:00:02.580 ******* 2026-02-27 01:00:41.316053 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.316060 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.316088 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.316096 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.316103 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.316109 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.316115 | orchestrator | 2026-02-27 01:00:41.316122 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-27 01:00:41.316128 | orchestrator | Friday 27 February 2026 00:48:39 +0000 (0:00:02.217) 0:00:04.797 ******* 2026-02-27 01:00:41.316135 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.316141 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.316148 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.316154 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.316162 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.316184 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.316190 | orchestrator | 2026-02-27 01:00:41.316196 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-27 01:00:41.316202 | orchestrator | Friday 27 February 2026 00:48:40 +0000 (0:00:01.406) 0:00:06.204 ******* 2026-02-27 01:00:41.316209 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.316216 | orchestrator | ok: [testbed-node-4] 2026-02-27 
01:00:41.316222 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.316229 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.316236 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.316255 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.316263 | orchestrator |
2026-02-27 01:00:41.316269 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-27 01:00:41.316276 | orchestrator | Friday 27 February 2026 00:48:42 +0000 (0:00:01.270) 0:00:07.475 *******
2026-02-27 01:00:41.316282 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.316288 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.316294 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.316300 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.316307 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.316313 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.316319 | orchestrator |
2026-02-27 01:00:41.316326 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-27 01:00:41.316333 | orchestrator | Friday 27 February 2026 00:48:43 +0000 (0:00:01.284) 0:00:08.759 *******
2026-02-27 01:00:41.316339 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.316346 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.316352 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.316358 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.316365 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.316371 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.316378 | orchestrator |
2026-02-27 01:00:41.316385 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-27 01:00:41.316392 | orchestrator | Friday 27 February 2026 00:48:44 +0000 (0:00:00.907) 0:00:09.667 *******
2026-02-27 01:00:41.316398 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.316404 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.316410 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.316416 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.316422 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.316427 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.316433 | orchestrator |
2026-02-27 01:00:41.316439 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-27 01:00:41.316445 | orchestrator | Friday 27 February 2026 00:48:45 +0000 (0:00:01.029) 0:00:10.696 *******
2026-02-27 01:00:41.316452 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.316460 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.316467 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.316473 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.316479 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.316485 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.316492 | orchestrator |
2026-02-27 01:00:41.316499 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-27 01:00:41.316514 | orchestrator | Friday 27 February 2026 00:48:46 +0000 (0:00:00.906) 0:00:11.603 *******
2026-02-27 01:00:41.316520 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.316526 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.316533 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.316540 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.316548 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.316555 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.316562 | orchestrator |
2026-02-27 01:00:41.316569 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-27 01:00:41.316576 | orchestrator | Friday 27 February 2026 00:48:47 +0000 (0:00:00.986) 0:00:12.589 *******
2026-02-27 01:00:41.316582 | orchestrator |
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-27 01:00:41.316589 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-27 01:00:41.316596 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-27 01:00:41.316603 | orchestrator |
2026-02-27 01:00:41.316610 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-27 01:00:41.316616 | orchestrator | Friday 27 February 2026 00:48:48 +0000 (0:00:00.812) 0:00:13.401 *******
2026-02-27 01:00:41.316623 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.316629 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.316636 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.316656 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.316663 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.316669 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.316676 | orchestrator |
2026-02-27 01:00:41.316683 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-27 01:00:41.316690 | orchestrator | Friday 27 February 2026 00:48:49 +0000 (0:00:01.213) 0:00:14.615 *******
2026-02-27 01:00:41.316697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-27 01:00:41.316703 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-27 01:00:41.316709 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-27 01:00:41.316716 | orchestrator |
2026-02-27 01:00:41.316721 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-27 01:00:41.316727 | orchestrator | Friday 27 February 2026 00:48:52 +0000 (0:00:02.954) 0:00:17.570 *******
2026-02-27 01:00:41.316733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-27 01:00:41.316740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-27 01:00:41.316747 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-27 01:00:41.316754 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.316760 | orchestrator |
2026-02-27 01:00:41.316766 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-27 01:00:41.316773 | orchestrator | Friday 27 February 2026 00:48:52 +0000 (0:00:00.704) 0:00:18.274 *******
2026-02-27 01:00:41.316781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316792 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316811 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.316818 | orchestrator |
2026-02-27 01:00:41.316831 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-27 01:00:41.316838 | orchestrator | Friday 27 February 2026 00:48:53 +0000 (0:00:00.866) 0:00:19.140 *******
2026-02-27 01:00:41.316846 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment |
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316855 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316869 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.316876 | orchestrator |
2026-02-27 01:00:41.316883 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-27 01:00:41.316889 | orchestrator | Friday 27 February 2026 00:48:54 +0000 (0:00:00.676) 0:00:19.817 *******
2026-02-27 01:00:41.316903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-27 00:48:50.058528', 'end': '2026-02-27 00:48:50.161723', 'delta': '0:00:00.103195', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316911 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-27 00:48:51.095302', 'end': '2026-02-27 00:48:51.192171', 'delta': '0:00:00.096869', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316921 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-27 00:48:51.757243', 'end': '2026-02-27 00:48:51.867316', 'delta': '0:00:00.110073', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.316933 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.316940 | orchestrator |
2026-02-27 01:00:41.316947 | orchestrator |
TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-27 01:00:41.316953 | orchestrator | Friday 27 February 2026 00:48:54 +0000 (0:00:00.185) 0:00:20.003 *******
2026-02-27 01:00:41.316960 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.316967 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.316973 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.316980 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.316986 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.316993 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.316999 | orchestrator |
2026-02-27 01:00:41.317006 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-27 01:00:41.317012 | orchestrator | Friday 27 February 2026 00:48:56 +0000 (0:00:01.973) 0:00:21.976 *******
2026-02-27 01:00:41.317019 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:00:41.317025 | orchestrator |
2026-02-27 01:00:41.317031 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-27 01:00:41.317038 | orchestrator | Friday 27 February 2026 00:48:57 +0000 (0:00:00.697) 0:00:22.674 *******
2026-02-27 01:00:41.317044 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317050 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317057 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317063 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317069 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317076 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317082 | orchestrator |
2026-02-27 01:00:41.317089 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-27 01:00:41.317095 | orchestrator | Friday 27 February 2026 00:48:58 +0000 (0:00:01.702) 0:00:24.376 *******
2026-02-27 01:00:41.317102 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317108 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317114 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317121 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317127 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317134 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317139 | orchestrator |
2026-02-27 01:00:41.317146 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-27 01:00:41.317153 | orchestrator | Friday 27 February 2026 00:49:01 +0000 (0:00:02.871) 0:00:27.248 *******
2026-02-27 01:00:41.317159 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317181 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317187 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317192 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317200 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317205 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317211 | orchestrator |
2026-02-27 01:00:41.317217 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-27 01:00:41.317223 | orchestrator | Friday 27 February 2026 00:49:04 +0000 (0:00:03.080) 0:00:30.330 *******
2026-02-27 01:00:41.317230 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317236 | orchestrator |
2026-02-27 01:00:41.317243 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-27 01:00:41.317250 | orchestrator | Friday 27 February 2026 00:49:05 +0000 (0:00:00.563) 0:00:30.894 *******
2026-02-27 01:00:41.317257 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317263 | orchestrator |
2026-02-27 01:00:41.317269 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-27 01:00:41.317276 | orchestrator | Friday 27 February 2026 00:49:06 +0000 (0:00:00.675) 0:00:31.569 *******
2026-02-27 01:00:41.317282 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317288 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317301 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317312 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317319 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317325 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317331 | orchestrator |
2026-02-27 01:00:41.317338 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-27 01:00:41.317344 | orchestrator | Friday 27 February 2026 00:49:07 +0000 (0:00:01.680) 0:00:33.250 *******
2026-02-27 01:00:41.317351 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317357 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317364 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317370 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317376 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317382 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317389 | orchestrator |
2026-02-27 01:00:41.317395 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-27 01:00:41.317402 | orchestrator | Friday 27 February 2026 00:49:10 +0000 (0:00:02.888) 0:00:36.138 *******
2026-02-27 01:00:41.317408 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317414 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317420 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317426 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317432 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317439 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317446 | orchestrator |
2026-02-27 01:00:41.317452 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-27 01:00:41.317458 | orchestrator | Friday 27 February 2026 00:49:11 +0000 (0:00:01.156) 0:00:37.295 *******
2026-02-27 01:00:41.317464 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317470 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317477 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317483 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317490 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317496 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317503 | orchestrator |
2026-02-27 01:00:41.317509 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-27 01:00:41.317514 | orchestrator | Friday 27 February 2026 00:49:13 +0000 (0:00:01.820) 0:00:39.115 *******
2026-02-27 01:00:41.317521 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317527 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317533 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317540 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317550 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317557 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317564 | orchestrator |
2026-02-27 01:00:41.317570 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-27 01:00:41.317577 | orchestrator | Friday 27 February 2026 00:49:14 +0000 (0:00:01.013) 0:00:40.129 *******
2026-02-27 01:00:41.317583 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317589 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317595 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317602 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317608 | orchestrator |
orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317615 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317621 | orchestrator |
2026-02-27 01:00:41.317628 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-27 01:00:41.317634 | orchestrator | Friday 27 February 2026 00:49:16 +0000 (0:00:01.543) 0:00:41.673 *******
2026-02-27 01:00:41.317640 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.317647 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.317653 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.317659 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.317665 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.317676 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.317683 | orchestrator |
2026-02-27 01:00:41.317690 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-27 01:00:41.317696 | orchestrator | Friday 27 February 2026 00:49:17 +0000 (0:00:01.066) 0:00:42.739 *******
2026-02-27 01:00:41.317704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a', 'dm-uuid-LVM-ktZNB2qrs3DaCnLkAdNHrqYVG23HKb1FGHO1W2U1zR2CbXChmoBj0ctfCoqUzjKf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3', 'dm-uuid-LVM-ZkL6ONrrTJ7thuRkFAXmCWJ98Giu8rzf6AyCY1QlpDnyMYhjrremnq2sgAaYdddg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317725 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d', 'dm-uuid-LVM-qJU288vwWpkc3KXMmYUCJORUt3aDMziKdcrQEt5vLA8Hjbzwqjl8UH3NpNbOBh11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2', 'dm-uuid-LVM-E17jWAJP6Me7aqZ4Q8UClyfqzp0zu2zwBObKfGSwewlrOjqJGlCTZm1c7oSX94jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model':
None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974', 'dm-uuid-LVM-XRxvjDzFqVbn17VReU4qIhLjXYCqKEKsQ1ZrgnhslVr38nUkWh0biaFxPwKrlCvY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317783 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe', 'dm-uuid-LVM-PnLQWj1f4ROpOubC0dQiJ0Udk3o62eo2PjpyV1d2N6Q39nuZoymfRyTDp9Nioxh6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels':
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317883 | orchestrator | skipping:
[testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317904 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-27 01:00:41.317940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-27 01:00:41.317957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6xG180-8oDB-fzAy-pAEY-lUOZ-L30t-ssoe3i', 'scsi-0QEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222', 'scsi-SQEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.317969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.317976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.317982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.317989 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'sdc', 'value': {'holders': ['ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wX9ua3-ujTP-p7s8-wxQz-my6v-aSdV-BlVN7a', 'scsi-0QEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c', 'scsi-SQEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b', 'scsi-SQEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318112 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-InaLzj-RS9L-jkkb-KINo-oXRf-l7yT-9o9jkD', 'scsi-0QEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca', 'scsi-SQEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9NBHH-zew4-pOfs-CtH8-hySc-o7NP-XT8fa2', 'scsi-0QEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d', 'scsi-SQEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318195 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318206 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gZhBvh-1LFh-ekih-MIdg-M8Jo-TTgF-yb1n12', 'scsi-0QEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d', 'scsi-SQEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9GzcCV-eEi2-9iq6-7OwL-k0t4-avIt-rnCcC9', 'scsi-0QEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da', 'scsi-SQEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3', 'scsi-SQEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318245 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47', 'scsi-SQEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318252 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-27 01:00:41.318310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318331 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.318338 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.318353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part1', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part14', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part15', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part16', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318368 | 
orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.318375 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.318381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318388 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.318394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-27 01:00:41.318454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:00:41.318462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part1', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part14', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part15', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part16', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:00:41.318484 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.318491 | orchestrator | 2026-02-27 01:00:41.318497 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-27 01:00:41.318504 | orchestrator | Friday 27 February 2026 00:49:19 +0000 (0:00:02.184) 0:00:44.923 ******* 2026-02-27 01:00:41.318513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a', 
'dm-uuid-LVM-ktZNB2qrs3DaCnLkAdNHrqYVG23HKb1FGHO1W2U1zR2CbXChmoBj0ctfCoqUzjKf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318523 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d', 'dm-uuid-LVM-qJU288vwWpkc3KXMmYUCJORUt3aDMziKdcrQEt5vLA8Hjbzwqjl8UH3NpNbOBh11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318538 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318545 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318559 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318567 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318573 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318591 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2', 'dm-uuid-LVM-E17jWAJP6Me7aqZ4Q8UClyfqzp0zu2zwBObKfGSwewlrOjqJGlCTZm1c7oSX94jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318598 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe', 'dm-uuid-LVM-PnLQWj1f4ROpOubC0dQiJ0Udk3o62eo2PjpyV1d2N6Q39nuZoymfRyTDp9Nioxh6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318613 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318620 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318627 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318637 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-27 01:00:41.318653 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318668 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318678 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3', 'dm-uuid-LVM-ZkL6ONrrTJ7thuRkFAXmCWJ98Giu8rzf6AyCY1QlpDnyMYhjrremnq2sgAaYdddg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318691 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6xG180-8oDB-fzAy-pAEY-lUOZ-L30t-ssoe3i', 'scsi-0QEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222', 'scsi-SQEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318713 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318723 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318730 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318736 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318743 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974', 'dm-uuid-LVM-XRxvjDzFqVbn17VReU4qIhLjXYCqKEKsQ1ZrgnhslVr38nUkWh0biaFxPwKrlCvY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318759 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318766 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318773 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318783 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318789 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wX9ua3-ujTP-p7s8-wxQz-my6v-aSdV-BlVN7a', 'scsi-0QEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c', 'scsi-SQEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318811 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318817 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318824 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318833 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318840 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318851 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part1', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part14', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part15', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part16', 'scsi-SQEMU_QEMU_HARDDISK_26592820-9606-46fa-9763-c5d42d9ec173-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 
'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318863 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318874 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318881 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE 
interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318899 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.318914 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part1', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part14', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part15', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part16', 'scsi-SQEMU_QEMU_HARDDISK_4935a670-85d5-4728-bfd3-2cafc3ce60ad-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-27 01:00:41.318922 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b', 'scsi-SQEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.318941 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319379 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319477 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319488 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.319498 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319506 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319533 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319556 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319566 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319576 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319592 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part1', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part14', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part15', 'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part16', 
'scsi-SQEMU_QEMU_HARDDISK_63f8a2f7-2c5c-47d8-abf0-9ea9e5c30cf9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319617 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319628 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 
01:00:41.319638 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319663 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.319673 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319733 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.319745 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319794 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-27 01:00:41.319840 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9NBHH-zew4-pOfs-CtH8-hySc-o7NP-XT8fa2', 'scsi-0QEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d', 'scsi-SQEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319857 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9GzcCV-eEi2-9iq6-7OwL-k0t4-avIt-rnCcC9', 'scsi-0QEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da', 'scsi-SQEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319867 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47', 'scsi-SQEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319877 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319890 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:00:41.319906 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.319923 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
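Every skipped item above carries one of two `false_condition` values. A minimal sketch of that evaluation order in plain Python (not ceph-ansible's code; the group name `ceph-osd` and the group membership are assumptions matching the pattern in this log, where testbed-node-0..2 fail the group test and testbed-node-3..5 fail the auto-discovery test):

```python
# Sketch only: mimics the two Jinja2 conditions seen in the skip records.
# The group name and membership below are assumed, not taken from ceph-ansible defaults.
groups = {"ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"]}


def false_condition(inventory_hostname, osd_group_name="ceph-osd",
                    osd_auto_discovery=False):
    """Return the first condition that evaluates False, or None if none do."""
    # Hosts outside the OSD group fail the membership check first
    # (testbed-node-0..2 in this log).
    if inventory_hostname not in groups.get(osd_group_name, []):
        return "inventory_hostname in groups.get(osd_group_name, [])"
    # OSD hosts still skip while auto discovery is off, which is the default
    # (testbed-node-3..5 in this log).
    if not osd_auto_discovery:
        return "osd_auto_discovery | default(False) | bool"
    return None
```

With auto discovery left at its default of `False`, every host skips the loop, which is why no item in this task produces a change.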
2026-02-27 01:00:41.319933 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-InaLzj-RS9L-jkkb-KINo-oXRf-l7yT-9o9jkD', 'scsi-0QEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca', 'scsi-SQEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.319946 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gZhBvh-1LFh-ekih-MIdg-M8Jo-TTgF-yb1n12', 'scsi-0QEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d', 'scsi-SQEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.319985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3', 'scsi-SQEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.319998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-27 01:00:41.320009 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.320018 | orchestrator |
2026-02-27 01:00:41.320033 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-27 01:00:41.320043 | orchestrator | Friday 27 February 2026 00:49:21 +0000 (0:00:01.804) 0:00:46.728 *******
2026-02-27 01:00:41.320209 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.320221 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.320231 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.320239 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.320249 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.320258 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.320267 | orchestrator |
2026-02-27 01:00:41.320277 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-27 01:00:41.320287 | orchestrator | Friday 27 February 2026 00:49:23 +0000 (0:00:01.843) 0:00:48.573 *******
2026-02-27 01:00:41.320296 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.320306 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.320315 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.320323 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.320332 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.320341 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.320351 | orchestrator |
2026-02-27 01:00:41.320360 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-27 01:00:41.320369 | orchestrator | Friday 27 February 2026 00:49:24 +0000 (0:00:01.374) 0:00:49.947 *******
2026-02-27 01:00:41.320378 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.320388 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.320397 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.320415 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.320423 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.320430 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.320439 | orchestrator |
2026-02-27 01:00:41.320447 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-27 01:00:41.320456 | orchestrator | Friday 27 February 2026 00:49:27 +0000 (0:00:02.900) 0:00:52.848 *******
2026-02-27 01:00:41.320465 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.320474 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.320484 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.320491 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.320500 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.320508 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.320516 | orchestrator |
2026-02-27 01:00:41.320525 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-27 01:00:41.320541 | orchestrator | Friday 27 February 2026 00:49:28 +0000 (0:00:01.163) 0:00:54.012 *******
2026-02-27 01:00:41.320551 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.320560 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.320570 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.320580 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.320588 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.320597 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.320605 | orchestrator |
2026-02-27 01:00:41.320615 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-27 01:00:41.320625 | orchestrator | Friday 27 February 2026 00:49:31 +0000 (0:00:02.603) 0:00:56.616 *******
2026-02-27 01:00:41.320634 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.320643 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.320652 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.320662 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.320671 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.320680 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.320690 | orchestrator |
2026-02-27 01:00:41.320700 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-27 01:00:41.320737 | orchestrator | Friday 27 February 2026 00:49:33 +0000 (0:00:01.798) 0:00:58.414 *******
2026-02-27 01:00:41.320747 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-27 01:00:41.320756 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-27 01:00:41.320765 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-27 01:00:41.320773 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 01:00:41.320782 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-27 01:00:41.320836 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-27 01:00:41.320848 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-27 01:00:41.320857 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-27 01:00:41.320866 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-27 01:00:41.320875 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-27 01:00:41.320884 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-27 01:00:41.320948 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-27 01:00:41.320959 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-27 01:00:41.320968 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-27 01:00:41.320978 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-27 01:00:41.320987 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-27 01:00:41.320996 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-27 01:00:41.321005 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-27 01:00:41.321114 | orchestrator |
2026-02-27 01:00:41.321125 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-27 01:00:41.321147 | orchestrator | Friday 27 February 2026 00:49:37 +0000 (0:00:04.142) 0:01:02.557 *******
2026-02-27 01:00:41.321157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-27 01:00:41.321216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-27 01:00:41.321227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-27 01:00:41.321236 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.321244 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-27 01:00:41.321254 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-27 01:00:41.321263 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-27 01:00:41.321272 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-27 01:00:41.321293 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-27 01:00:41.321303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-27 01:00:41.321312 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.321320 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.321329 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 01:00:41.321338 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-27 01:00:41.321347 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-27 01:00:41.321356 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-27 01:00:41.321365 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-27 01:00:41.321374 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-27 01:00:41.321381 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.321390 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.321399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-27 01:00:41.321406 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-27 01:00:41.321415 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-27 01:00:41.321423 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.321431 | orchestrator |
2026-02-27 01:00:41.321440 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-27 01:00:41.321447 | orchestrator | Friday 27 February 2026 00:49:38 +0000 (0:00:00.996) 0:01:03.553 *******
2026-02-27 01:00:41.321455 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.321462 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.321470 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.321479 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.321487 | orchestrator |
2026-02-27 01:00:41.321495 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-27 01:00:41.321505 | orchestrator | Friday 27 February 2026 00:49:40 +0000 (0:00:02.587) 0:01:06.141 *******
2026-02-27 01:00:41.321514 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.321523 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.321568 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.321580 | orchestrator |
2026-02-27 01:00:41.321590 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-27 01:00:41.321676 | orchestrator | Friday 27 February 2026 00:49:41 +0000 (0:00:00.537) 0:01:06.679 *******
2026-02-27 01:00:41.321685 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.321694 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.321736 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.321747 | orchestrator |
2026-02-27 01:00:41.321755 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-27 01:00:41.321764 | orchestrator | Friday 27 February 2026 00:49:41 +0000 (0:00:01.038) 0:01:07.293 *******
2026-02-27 01:00:41.321772 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.321788 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.321797 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.321806 | orchestrator |
2026-02-27 01:00:41.321814 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-27 01:00:41.321822 | orchestrator | Friday 27 February 2026 00:49:42 +0000 (0:00:01.038) 0:01:08.332 *******
2026-02-27 01:00:41.321830 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.321838 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.321847 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.321855 | orchestrator |
2026-02-27 01:00:41.321864 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-27 01:00:41.321872 | orchestrator | Friday 27 February 2026 00:49:43 +0000 (0:00:00.685) 0:01:09.018 *******
2026-02-27 01:00:41.321880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.321889 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.321898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.321907 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.321916 | orchestrator |
2026-02-27 01:00:41.321924 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-27 01:00:41.321932 | orchestrator | Friday 27 February 2026 00:49:44 +0000 (0:00:00.454) 0:01:09.473 *******
2026-02-27 01:00:41.321940 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.321949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.321958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.321966 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.321974 | orchestrator |
2026-02-27 01:00:41.321982 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-27 01:00:41.321989 | orchestrator | Friday 27 February 2026 00:49:44 +0000 (0:00:00.429) 0:01:09.902 *******
2026-02-27 01:00:41.321997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.322005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.322012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.322099 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.322109 | orchestrator |
2026-02-27 01:00:41.322118 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-27 01:00:41.322127 | orchestrator | Friday 27 February 2026 00:49:44 +0000 (0:00:00.414) 0:01:10.317 *******
2026-02-27 01:00:41.322136 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.322144 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.322152 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.322161 | orchestrator |
2026-02-27 01:00:41.322214 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-27 01:00:41.322222 | orchestrator | Friday 27 February 2026 00:49:45 +0000 (0:00:00.534) 0:01:10.852 *******
2026-02-27 01:00:41.322231 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-27 01:00:41.322241 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-27 01:00:41.322849 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-27 01:00:41.322923 | orchestrator |
2026-02-27 01:00:41.322929 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-27 01:00:41.322935 | orchestrator | Friday 27 February 2026 00:49:46 +0000 (0:00:01.285) 0:01:12.137 *******
2026-02-27 01:00:41.322940 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-27 01:00:41.322945 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-27 01:00:41.322949 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-27 01:00:41.322955 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.322959 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-27 01:00:41.322981 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-27 01:00:41.322985 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-27 01:00:41.322989 | orchestrator |
2026-02-27 01:00:41.322994 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-27 01:00:41.322997 | orchestrator | Friday 27 February 2026 00:49:47 +0000 (0:00:01.131) 0:01:13.268 *******
2026-02-27 01:00:41.323001 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-27 01:00:41.323005 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-27 01:00:41.323009 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-27 01:00:41.323013 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.323017 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-27 01:00:41.323020 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-27 01:00:41.323024 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-27 01:00:41.323028 | orchestrator |
2026-02-27 01:00:41.323043 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-27 01:00:41.323047 | orchestrator | Friday 27 February 2026 00:49:50 +0000 (0:00:02.638) 0:01:15.907 *******
2026-02-27 01:00:41.323052 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.323057 | orchestrator |
2026-02-27 01:00:41.323061 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-27 01:00:41.323065 | orchestrator | Friday 27 February 2026 00:49:51 +0000 (0:00:01.411) 0:01:17.319 *******
2026-02-27 01:00:41.323069 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.323073 | orchestrator |
2026-02-27 01:00:41.323077 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-27 01:00:41.323080 | orchestrator | Friday 27 February 2026 00:49:53 +0000 (0:00:01.262) 0:01:18.582 *******
2026-02-27 01:00:41.323084 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323088 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323092 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323096 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323100 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323104 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323107 | orchestrator |
2026-02-27 01:00:41.323111 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-27 01:00:41.323115 | orchestrator | Friday 27 February 2026 00:49:54 +0000 (0:00:01.793) 0:01:20.376 *******
2026-02-27 01:00:41.323119 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323123 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323129 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323135 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323140 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323146 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323156 | orchestrator |
2026-02-27 01:00:41.323202 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-27 01:00:41.323210 | orchestrator | Friday 27 February 2026 00:49:56 +0000 (0:00:01.126) 0:01:21.502 *******
2026-02-27 01:00:41.323216 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323222 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323228 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323234 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323240 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323246 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323258 | orchestrator |
2026-02-27 01:00:41.323262 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-27 01:00:41.323266 | orchestrator | Friday 27 February 2026 00:49:57 +0000 (0:00:01.437) 0:01:22.940 *******
2026-02-27 01:00:41.323269 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323273 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323277 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323280 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323284 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323288 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323292 | orchestrator |
2026-02-27 01:00:41.323295 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-27 01:00:41.323300 | orchestrator | Friday 27 February 2026 00:49:58 +0000 (0:00:00.892) 0:01:23.832 *******
2026-02-27 01:00:41.323303 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323307 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323311 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323314 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323318 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323334 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323338 | orchestrator |
2026-02-27 01:00:41.323342 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-27 01:00:41.323346 | orchestrator | Friday 27 February 2026 00:49:59 +0000 (0:00:01.444) 0:01:25.276 *******
2026-02-27 01:00:41.323349 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323353 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323357 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323360 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323364 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323368 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323371 | orchestrator |
2026-02-27 01:00:41.323375 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-27 01:00:41.323379 | orchestrator | Friday 27 February 2026 00:50:01 +0000 (0:00:01.178) 0:01:26.455 *******
2026-02-27 01:00:41.323383 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323388 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323392 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323397 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323401 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323405 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323410 | orchestrator |
2026-02-27 01:00:41.323414 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-27 01:00:41.323419 | orchestrator | Friday 27 February 2026 00:50:02 +0000 (0:00:01.114) 0:01:27.569 *******
2026-02-27 01:00:41.323423 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323427 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323432 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323437 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323441 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323445 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323450 | orchestrator |
2026-02-27 01:00:41.323454 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-27 01:00:41.323459 | orchestrator | Friday 27 February 2026 00:50:03 +0000 (0:00:01.123) 0:01:28.693 *******
2026-02-27 01:00:41.323463 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323467 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323472 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323476 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323481 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323485 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323490 | orchestrator |
2026-02-27 01:00:41.323498 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-27 01:00:41.323502 | orchestrator | Friday 27 February 2026 00:50:04 +0000 (0:00:01.360) 0:01:30.053 *******
2026-02-27 01:00:41.323507 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323516 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323520 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323525 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323529 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323534 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323538 | orchestrator |
2026-02-27 01:00:41.323542 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-27 01:00:41.323547 | orchestrator | Friday 27 February 2026 00:50:05 +0000 (0:00:00.666) 0:01:30.719 *******
2026-02-27 01:00:41.323551 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323556 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323560 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323564 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323569 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323573 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323578 | orchestrator |
2026-02-27 01:00:41.323582 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-27 01:00:41.323586 | orchestrator | Friday 27 February 2026 00:50:06 +0000 (0:00:00.779) 0:01:31.499 *******
2026-02-27 01:00:41.323591 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323595 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323599 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323603 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323608 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323612 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323616 | orchestrator |
2026-02-27 01:00:41.323621 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-27 01:00:41.323625 | orchestrator | Friday 27 February 2026 00:50:06 +0000 (0:00:00.623) 0:01:32.123 *******
2026-02-27 01:00:41.323630 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323634 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323638 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323643 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323647 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323651 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323656 | orchestrator |
2026-02-27 01:00:41.323660 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-27 01:00:41.323665 | orchestrator | Friday 27 February 2026 00:50:07 +0000 (0:00:01.223) 0:01:33.346 *******
2026-02-27 01:00:41.323669 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323673 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323678 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323682 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323687 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323691 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323696 | orchestrator |
2026-02-27 01:00:41.323700 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-27 01:00:41.323704 | orchestrator | Friday 27 February 2026 00:50:08 +0000 (0:00:00.684) 0:01:34.031 *******
2026-02-27 01:00:41.323709 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323713 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323717 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323721 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323726 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323730 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323734 | orchestrator |
2026-02-27 01:00:41.323739 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-27 01:00:41.323744 | orchestrator | Friday 27 February 2026 00:50:09 +0000 (0:00:00.878) 0:01:34.909 *******
2026-02-27 01:00:41.323748 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323753 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323757 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323762 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323769 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.323780 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.323784 | orchestrator |
2026-02-27 01:00:41.323787 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-27 01:00:41.323791 | orchestrator | Friday 27 February 2026 00:50:10 +0000 (0:00:00.606) 0:01:35.516 *******
2026-02-27 01:00:41.323795 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323799 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323802 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323806 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323810 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323813 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323817 | orchestrator |
2026-02-27 01:00:41.323821 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-27 01:00:41.323825 | orchestrator | Friday 27 February 2026 00:50:11 +0000 (0:00:01.205) 0:01:36.721 *******
2026-02-27 01:00:41.323828 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323832 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323836 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323839 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323843 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323846 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323850 | orchestrator |
2026-02-27 01:00:41.323854 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-27 01:00:41.323858 | orchestrator | Friday 27 February 2026 00:50:12 +0000 (0:00:01.121) 0:01:37.842 *******
2026-02-27 01:00:41.323861 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.323865 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.323868 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.323872 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.323876 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.323879 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.323883 | orchestrator |
2026-02-27 01:00:41.323887 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-27 01:00:41.323890 | orchestrator | Friday 27 February 2026 00:50:14 +0000 (0:00:02.392) 0:01:40.235 *******
2026-02-27 01:00:41.323895 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.323898 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.323902 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.323906 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.323909 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.323913 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.323917 | orchestrator |
2026-02-27 01:00:41.323923 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-27 01:00:41.323927 | orchestrator | Friday 27 February 2026 00:50:17 +0000 (0:00:02.249) 0:01:42.485 *******
2026-02-27 01:00:41.323931 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.323934 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.323938 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.323942 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.323945 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.323949 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.323953 | orchestrator |
2026-02-27 01:00:41.323956 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-27 01:00:41.323960 | orchestrator | Friday 27 February 2026 00:50:20 +0000 (0:00:03.596) 0:01:46.082 *******
2026-02-27 01:00:41.323964 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.323968 | orchestrator |
2026-02-27 01:00:41.323972 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-27 01:00:41.323975 | orchestrator | Friday 27 February 2026 00:50:22 +0000 (0:00:02.081) 0:01:48.164 *******
2026-02-27 01:00:41.323979 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.323983 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.323990 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.323994 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.323997 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.324001 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.324005 | orchestrator |
2026-02-27 01:00:41.324008 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-27 01:00:41.324012 | orchestrator | Friday 27 February 2026 00:50:24 +0000 (0:00:01.339) 0:01:49.503 *******
2026-02-27 01:00:41.324016 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.324019 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.324023 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.324027 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.324030 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.324034 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.324038 | orchestrator |
2026-02-27 01:00:41.324041 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-27 01:00:41.324045 | orchestrator | Friday 27 February 2026 00:50:26 +0000 (0:00:01.908) 0:01:51.412 *******
2026-02-27 01:00:41.324049 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-27 01:00:41.324053 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-27 01:00:41.324056 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-27 01:00:41.324060 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-27 01:00:41.324063 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-27 01:00:41.324067 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-27 01:00:41.324071 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-27 01:00:41.324075 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-27 01:00:41.324078 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-27 01:00:41.324082 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-27 01:00:41.324089 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-27 01:00:41.324093 | orchestrator |
ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-27 01:00:41.324097 | orchestrator | 2026-02-27 01:00:41.324101 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-27 01:00:41.324104 | orchestrator | Friday 27 February 2026 00:50:28 +0000 (0:00:02.355) 0:01:53.768 ******* 2026-02-27 01:00:41.324108 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.324112 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.324118 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.324124 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.324130 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.324136 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.324142 | orchestrator | 2026-02-27 01:00:41.324148 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-27 01:00:41.324154 | orchestrator | Friday 27 February 2026 00:50:31 +0000 (0:00:03.494) 0:01:57.263 ******* 2026-02-27 01:00:41.324160 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324178 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324184 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324189 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324194 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324199 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324204 | orchestrator | 2026-02-27 01:00:41.324209 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-27 01:00:41.324214 | orchestrator | Friday 27 February 2026 00:50:33 +0000 (0:00:01.933) 0:01:59.197 ******* 2026-02-27 01:00:41.324225 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324232 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324236 | orchestrator | skipping: [testbed-node-5] 
2026-02-27 01:00:41.324240 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324243 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324247 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324251 | orchestrator | 2026-02-27 01:00:41.324254 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-27 01:00:41.324258 | orchestrator | Friday 27 February 2026 00:50:35 +0000 (0:00:01.494) 0:02:00.691 ******* 2026-02-27 01:00:41.324262 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324269 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324273 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324276 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324280 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324284 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324287 | orchestrator | 2026-02-27 01:00:41.324291 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-27 01:00:41.324295 | orchestrator | Friday 27 February 2026 00:50:35 +0000 (0:00:00.650) 0:02:01.342 ******* 2026-02-27 01:00:41.324299 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.324303 | orchestrator | 2026-02-27 01:00:41.324306 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-27 01:00:41.324310 | orchestrator | Friday 27 February 2026 00:50:37 +0000 (0:00:01.424) 0:02:02.767 ******* 2026-02-27 01:00:41.324314 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.324317 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.324321 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.324325 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.324328 | 
orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.324332 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.324336 | orchestrator | 2026-02-27 01:00:41.324339 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-27 01:00:41.324343 | orchestrator | Friday 27 February 2026 00:51:22 +0000 (0:00:44.792) 0:02:47.559 ******* 2026-02-27 01:00:41.324347 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-27 01:00:41.324351 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-27 01:00:41.324354 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-27 01:00:41.324358 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324362 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-27 01:00:41.324365 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-27 01:00:41.324369 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-27 01:00:41.324373 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324376 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-27 01:00:41.324380 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-27 01:00:41.324384 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-27 01:00:41.324387 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324391 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-27 01:00:41.324395 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-27 01:00:41.324398 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  
2026-02-27 01:00:41.324402 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324406 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-27 01:00:41.324414 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-27 01:00:41.324418 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-27 01:00:41.324422 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324429 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-27 01:00:41.324433 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-27 01:00:41.324437 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-27 01:00:41.324441 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324444 | orchestrator | 2026-02-27 01:00:41.324448 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-27 01:00:41.324452 | orchestrator | Friday 27 February 2026 00:51:22 +0000 (0:00:00.711) 0:02:48.271 ******* 2026-02-27 01:00:41.324455 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324459 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324463 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324467 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324470 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324474 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324478 | orchestrator | 2026-02-27 01:00:41.324481 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-27 01:00:41.324485 | orchestrator | Friday 27 February 2026 00:51:23 +0000 (0:00:00.890) 0:02:49.162 ******* 2026-02-27 01:00:41.324489 | orchestrator | skipping: [testbed-node-3] 2026-02-27 
01:00:41.324492 | orchestrator | 2026-02-27 01:00:41.324496 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-27 01:00:41.324500 | orchestrator | Friday 27 February 2026 00:51:23 +0000 (0:00:00.163) 0:02:49.325 ******* 2026-02-27 01:00:41.324504 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324507 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324511 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324515 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324518 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324522 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324526 | orchestrator | 2026-02-27 01:00:41.324529 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-27 01:00:41.324533 | orchestrator | Friday 27 February 2026 00:51:24 +0000 (0:00:00.719) 0:02:50.045 ******* 2026-02-27 01:00:41.324537 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324540 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324544 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324548 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324551 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324558 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324562 | orchestrator | 2026-02-27 01:00:41.324565 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-27 01:00:41.324569 | orchestrator | Friday 27 February 2026 00:51:25 +0000 (0:00:01.279) 0:02:51.325 ******* 2026-02-27 01:00:41.324573 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324576 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324580 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324584 | orchestrator | skipping: [testbed-node-0] 2026-02-27 
01:00:41.324587 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324591 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324595 | orchestrator | 2026-02-27 01:00:41.324599 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-27 01:00:41.324602 | orchestrator | Friday 27 February 2026 00:51:27 +0000 (0:00:01.219) 0:02:52.544 ******* 2026-02-27 01:00:41.324606 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.324610 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.324614 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.324621 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.324625 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.324628 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.324632 | orchestrator | 2026-02-27 01:00:41.324636 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-27 01:00:41.324640 | orchestrator | Friday 27 February 2026 00:51:30 +0000 (0:00:03.327) 0:02:55.871 ******* 2026-02-27 01:00:41.324643 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.324647 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.324650 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.324654 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.324658 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.324661 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.324665 | orchestrator | 2026-02-27 01:00:41.324669 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-27 01:00:41.324673 | orchestrator | Friday 27 February 2026 00:51:31 +0000 (0:00:00.967) 0:02:56.839 ******* 2026-02-27 01:00:41.324677 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 
01:00:41.324682 | orchestrator | 2026-02-27 01:00:41.324686 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-27 01:00:41.324690 | orchestrator | Friday 27 February 2026 00:51:32 +0000 (0:00:01.535) 0:02:58.375 ******* 2026-02-27 01:00:41.324694 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324697 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324701 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324705 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324708 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324712 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324716 | orchestrator | 2026-02-27 01:00:41.324719 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-27 01:00:41.324723 | orchestrator | Friday 27 February 2026 00:51:34 +0000 (0:00:01.165) 0:02:59.540 ******* 2026-02-27 01:00:41.324727 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324731 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324734 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324738 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324742 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324745 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324749 | orchestrator | 2026-02-27 01:00:41.324753 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-27 01:00:41.324756 | orchestrator | Friday 27 February 2026 00:51:35 +0000 (0:00:00.911) 0:03:00.451 ******* 2026-02-27 01:00:41.324760 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324764 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324770 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324774 | orchestrator | skipping: [testbed-node-0] 2026-02-27 
01:00:41.324777 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324781 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324785 | orchestrator | 2026-02-27 01:00:41.324788 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-27 01:00:41.324792 | orchestrator | Friday 27 February 2026 00:51:36 +0000 (0:00:01.344) 0:03:01.796 ******* 2026-02-27 01:00:41.324796 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324800 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324803 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324807 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324811 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324814 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324818 | orchestrator | 2026-02-27 01:00:41.324822 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-27 01:00:41.324825 | orchestrator | Friday 27 February 2026 00:51:37 +0000 (0:00:00.765) 0:03:02.562 ******* 2026-02-27 01:00:41.324832 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324836 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324840 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324843 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324847 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324851 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324855 | orchestrator | 2026-02-27 01:00:41.324858 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-27 01:00:41.324862 | orchestrator | Friday 27 February 2026 00:51:38 +0000 (0:00:01.006) 0:03:03.568 ******* 2026-02-27 01:00:41.324866 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324869 | orchestrator | skipping: [testbed-node-4] 2026-02-27 
01:00:41.324873 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324877 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324880 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324884 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324888 | orchestrator | 2026-02-27 01:00:41.324891 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-27 01:00:41.324895 | orchestrator | Friday 27 February 2026 00:51:39 +0000 (0:00:01.162) 0:03:04.730 ******* 2026-02-27 01:00:41.324899 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324903 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324906 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324916 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324920 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324924 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324928 | orchestrator | 2026-02-27 01:00:41.324931 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-27 01:00:41.324935 | orchestrator | Friday 27 February 2026 00:51:40 +0000 (0:00:01.054) 0:03:05.785 ******* 2026-02-27 01:00:41.324939 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.324943 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.324946 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.324950 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.324954 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.324957 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.324961 | orchestrator | 2026-02-27 01:00:41.324965 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-27 01:00:41.324968 | orchestrator | Friday 27 February 2026 00:51:41 +0000 (0:00:01.009) 0:03:06.794 ******* 2026-02-27 
01:00:41.324972 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.324976 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.324980 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.324983 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.324987 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.324991 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.324994 | orchestrator | 2026-02-27 01:00:41.324998 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-27 01:00:41.325002 | orchestrator | Friday 27 February 2026 00:51:43 +0000 (0:00:01.950) 0:03:08.744 ******* 2026-02-27 01:00:41.325006 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-2 2026-02-27 01:00:41.325009 | orchestrator | 2026-02-27 01:00:41.325013 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-27 01:00:41.325017 | orchestrator | Friday 27 February 2026 00:51:45 +0000 (0:00:01.776) 0:03:10.521 ******* 2026-02-27 01:00:41.325021 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-27 01:00:41.325025 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-27 01:00:41.325029 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-27 01:00:41.325032 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-27 01:00:41.325036 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-27 01:00:41.325043 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-27 01:00:41.325047 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-27 01:00:41.325051 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-27 01:00:41.325055 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 
2026-02-27 01:00:41.325058 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-27 01:00:41.325062 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-27 01:00:41.325066 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-27 01:00:41.325069 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-27 01:00:41.325073 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-27 01:00:41.325077 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-27 01:00:41.325081 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-27 01:00:41.325084 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-27 01:00:41.325088 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-27 01:00:41.325094 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-27 01:00:41.325098 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-27 01:00:41.325102 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-27 01:00:41.325106 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-27 01:00:41.325109 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-27 01:00:41.325114 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-27 01:00:41.325120 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-27 01:00:41.325126 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-27 01:00:41.325132 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-27 01:00:41.325139 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-27 01:00:41.325145 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-27 01:00:41.325150 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-27 01:00:41.325157 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-27 01:00:41.325193 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-27 01:00:41.325199 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-27 01:00:41.325203 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-27 01:00:41.325207 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-27 01:00:41.325210 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-27 01:00:41.325214 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-27 01:00:41.325218 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-27 01:00:41.325222 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-27 01:00:41.325226 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-27 01:00:41.325230 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-27 01:00:41.325233 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-27 01:00:41.325241 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-27 01:00:41.325245 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-27 01:00:41.325248 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-27 01:00:41.325252 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-27 01:00:41.325256 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-27 01:00:41.325259 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-27 01:00:41.325267 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-27 
01:00:41.325271 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-27 01:00:41.325275 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-27 01:00:41.325278 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-27 01:00:41.325282 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-27 01:00:41.325286 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-27 01:00:41.325289 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-27 01:00:41.325293 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-27 01:00:41.325297 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-27 01:00:41.325300 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-27 01:00:41.325304 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-27 01:00:41.325308 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-27 01:00:41.325311 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-27 01:00:41.325315 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-27 01:00:41.325319 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-27 01:00:41.325322 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-27 01:00:41.325326 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-27 01:00:41.325330 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-27 01:00:41.325333 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-27 01:00:41.325337 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-osd) 2026-02-27 01:00:41.325341 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-27 01:00:41.325344 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-27 01:00:41.325348 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-27 01:00:41.325352 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-27 01:00:41.325356 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-27 01:00:41.325359 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-27 01:00:41.325363 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-27 01:00:41.325367 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-27 01:00:41.325373 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-27 01:00:41.325377 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-27 01:00:41.325381 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-27 01:00:41.325385 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-27 01:00:41.325388 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-27 01:00:41.325392 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-27 01:00:41.325396 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-27 01:00:41.325400 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-27 01:00:41.325403 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-27 01:00:41.325407 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-27 01:00:41.325411 | orchestrator | 
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-27 01:00:41.325415 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-27 01:00:41.325422 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-27 01:00:41.325426 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-27 01:00:41.325430 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-27 01:00:41.325434 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-27 01:00:41.325437 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-27 01:00:41.325441 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-27 01:00:41.325445 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-27 01:00:41.325449 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-27 01:00:41.325452 | orchestrator | 2026-02-27 01:00:41.325456 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-27 01:00:41.325460 | orchestrator | Friday 27 February 2026 00:51:52 +0000 (0:00:07.361) 0:03:17.882 ******* 2026-02-27 01:00:41.325464 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.325468 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.325474 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.325479 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.325483 | orchestrator | 2026-02-27 01:00:41.325487 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-27 01:00:41.325490 | orchestrator | Friday 27 February 2026 00:51:53 +0000 (0:00:01.289) 0:03:19.171 ******* 2026-02-27 01:00:41.325494 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325499 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325503 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325506 | orchestrator |
2026-02-27 01:00:41.325510 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-27 01:00:41.325514 | orchestrator | Friday 27 February 2026 00:51:55 +0000 (0:00:01.400) 0:03:20.572 *******
2026-02-27 01:00:41.325518 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325522 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325525 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325529 | orchestrator |
2026-02-27 01:00:41.325533 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-27 01:00:41.325537 | orchestrator | Friday 27 February 2026 00:51:56 +0000 (0:00:01.445) 0:03:22.018 *******
2026-02-27 01:00:41.325541 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.325544 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.325548 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.325552 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325556 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325559 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325563 | orchestrator |
2026-02-27 01:00:41.325567 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-27 01:00:41.325571 | orchestrator | Friday 27 February 2026 00:51:57 +0000 (0:00:00.875) 0:03:22.893 *******
2026-02-27 01:00:41.325574 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.325578 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.325582 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.325586 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325589 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325597 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325600 | orchestrator |
2026-02-27 01:00:41.325604 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-27 01:00:41.325608 | orchestrator | Friday 27 February 2026 00:51:58 +0000 (0:00:01.340) 0:03:24.234 *******
2026-02-27 01:00:41.325612 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.325615 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325619 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.325623 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325627 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.325630 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325634 | orchestrator |
2026-02-27 01:00:41.325640 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-27 01:00:41.325644 | orchestrator | Friday 27 February 2026 00:51:59 +0000 (0:00:01.032) 0:03:25.267 *******
2026-02-27 01:00:41.325648 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.325652 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.325655 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.325659 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325663 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325666 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325670 | orchestrator |
2026-02-27 01:00:41.325674 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-27 01:00:41.325678 | orchestrator | Friday 27 February 2026 00:52:01 +0000 (0:00:01.256) 0:03:26.523 *******
2026-02-27 01:00:41.325681 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.325685 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.325689 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.325692 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325696 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325700 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325703 | orchestrator |
2026-02-27 01:00:41.325707 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-27 01:00:41.325711 | orchestrator | Friday 27 February 2026 00:52:02 +0000 (0:00:00.900) 0:03:27.424 *******
2026-02-27 01:00:41.325715 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.325718 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.325722 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.325726 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325729 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325733 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325737 | orchestrator |
2026-02-27 01:00:41.325741 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-27 01:00:41.325744 | orchestrator | Friday 27 February 2026 00:52:02 +0000 (0:00:00.760) 0:03:28.185 *******
2026-02-27 01:00:41.325748 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.325752 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.325756 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.325759 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325763 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325767 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325770 | orchestrator |
2026-02-27 01:00:41.325778 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-27 01:00:41.325782 | orchestrator | Friday 27 February 2026 00:52:03 +0000 (0:00:00.629) 0:03:28.814 *******
2026-02-27 01:00:41.325786 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.325789 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.325793 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.325797 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325801 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325804 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325808 | orchestrator |
2026-02-27 01:00:41.325815 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-27 01:00:41.325819 | orchestrator | Friday 27 February 2026 00:52:04 +0000 (0:00:00.922) 0:03:29.737 *******
2026-02-27 01:00:41.325823 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325826 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325830 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325834 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.325838 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.325842 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.325845 | orchestrator |
2026-02-27 01:00:41.325849 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-27 01:00:41.325853 | orchestrator | Friday 27 February 2026 00:52:07 +0000 (0:00:03.309) 0:03:33.047 *******
2026-02-27 01:00:41.325857 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.325860 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.325864 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.325868 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325871 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325875 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325879 | orchestrator |
2026-02-27 01:00:41.325882 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-27 01:00:41.325886 | orchestrator | Friday 27 February 2026 00:52:08 +0000 (0:00:00.945) 0:03:33.993 *******
2026-02-27 01:00:41.325890 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.325894 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.325897 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.325901 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325905 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325909 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325912 | orchestrator |
2026-02-27 01:00:41.325916 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-27 01:00:41.325920 | orchestrator | Friday 27 February 2026 00:52:09 +0000 (0:00:00.709) 0:03:34.703 *******
2026-02-27 01:00:41.325924 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.325927 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.325931 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.325935 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325938 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325942 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325946 | orchestrator |
2026-02-27 01:00:41.325949 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-27 01:00:41.325953 | orchestrator | Friday 27 February 2026 00:52:10 +0000 (0:00:01.262) 0:03:35.965 *******
2026-02-27 01:00:41.325957 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325961 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325965 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-27 01:00:41.325968 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.325975 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.325979 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.325983 | orchestrator |
2026-02-27 01:00:41.325986 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-27 01:00:41.325990 | orchestrator | Friday 27 February 2026 00:52:11 +0000 (0:00:01.232) 0:03:37.197 *******
2026-02-27 01:00:41.325995 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-27 01:00:41.326005 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-27 01:00:41.326011 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326042 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-27 01:00:41.326046 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-27 01:00:41.326055 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326059 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-27 01:00:41.326063 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-27 01:00:41.326067 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326072 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326076 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326080 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326083 | orchestrator |
2026-02-27 01:00:41.326087 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-27 01:00:41.326091 | orchestrator | Friday 27 February 2026 00:52:12 +0000 (0:00:01.118) 0:03:38.316 *******
2026-02-27 01:00:41.326094 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326098 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326102 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326106 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326109 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326114 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326120 | orchestrator |
2026-02-27 01:00:41.326126 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-27 01:00:41.326132 | orchestrator | Friday 27 February 2026 00:52:13 +0000 (0:00:00.881) 0:03:39.197 *******
2026-02-27 01:00:41.326139 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326144 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326150 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326155 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326161 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326184 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326190 | orchestrator |
2026-02-27 01:00:41.326197 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-27 01:00:41.326204 | orchestrator | Friday 27 February 2026 00:52:14 +0000 (0:00:00.948) 0:03:40.146 *******
2026-02-27 01:00:41.326211 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326215 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326219 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326223 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326226 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326230 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326239 | orchestrator |
2026-02-27 01:00:41.326242 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-27 01:00:41.326246 | orchestrator | Friday 27 February 2026 00:52:15 +0000 (0:00:01.098) 0:03:41.244 *******
2026-02-27 01:00:41.326250 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326254 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326257 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326261 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326265 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326269 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326272 | orchestrator |
2026-02-27 01:00:41.326276 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-27 01:00:41.326293 | orchestrator | Friday 27 February 2026 00:52:16 +0000 (0:00:00.910) 0:03:42.155 *******
2026-02-27 01:00:41.326300 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326306 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326312 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326318 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326324 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326330 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326336 | orchestrator |
2026-02-27 01:00:41.326342 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-27 01:00:41.326348 | orchestrator | Friday 27 February 2026 00:52:17 +0000 (0:00:00.728) 0:03:42.883 *******
2026-02-27 01:00:41.326355 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.326361 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.326366 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326373 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.326378 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326382 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326385 | orchestrator |
2026-02-27 01:00:41.326389 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-27 01:00:41.326393 | orchestrator | Friday 27 February 2026 00:52:18 +0000 (0:00:00.900) 0:03:43.783 *******
2026-02-27 01:00:41.326396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.326400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.326404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.326408 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326411 | orchestrator |
2026-02-27 01:00:41.326415 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-27 01:00:41.326419 | orchestrator | Friday 27 February 2026 00:52:18 +0000 (0:00:00.442) 0:03:44.225 *******
2026-02-27 01:00:41.326423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.326427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.326430 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.326434 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326438 | orchestrator |
2026-02-27 01:00:41.326441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-27 01:00:41.326449 | orchestrator | Friday 27 February 2026 00:52:19 +0000 (0:00:00.433) 0:03:44.659 *******
2026-02-27 01:00:41.326453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.326456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.326460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.326464 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326467 | orchestrator |
2026-02-27 01:00:41.326471 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-27 01:00:41.326475 | orchestrator | Friday 27 February 2026 00:52:19 +0000 (0:00:00.470) 0:03:45.130 *******
2026-02-27 01:00:41.326479 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.326482 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.326490 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.326494 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326498 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326501 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326505 | orchestrator |
2026-02-27 01:00:41.326509 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-27 01:00:41.326512 | orchestrator | Friday 27 February 2026 00:52:20 +0000 (0:00:00.812) 0:03:45.942 *******
2026-02-27 01:00:41.326516 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-27 01:00:41.326520 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-27 01:00:41.326523 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-27 01:00:41.326528 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-27 01:00:41.326531 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326535 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-27 01:00:41.326539 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326542 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-27 01:00:41.326546 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326550 | orchestrator |
2026-02-27 01:00:41.326553 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-27 01:00:41.326557 | orchestrator | Friday 27 February 2026 00:52:23 +0000 (0:00:02.453) 0:03:48.396 *******
2026-02-27 01:00:41.326561 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.326564 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.326568 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.326572 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.326575 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.326579 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.326583 | orchestrator |
2026-02-27 01:00:41.326586 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-27 01:00:41.326590 | orchestrator | Friday 27 February 2026 00:52:26 +0000 (0:00:03.695) 0:03:52.092 *******
2026-02-27 01:00:41.326594 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.326597 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.326601 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.326605 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.326608 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.326612 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.326616 | orchestrator |
2026-02-27 01:00:41.326619 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-27 01:00:41.326623 | orchestrator | Friday 27 February 2026 00:52:28 +0000 (0:00:01.385) 0:03:53.477 *******
2026-02-27 01:00:41.326627 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326630 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326634 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326638 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.326642 | orchestrator |
2026-02-27 01:00:41.326646 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-27 01:00:41.326653 | orchestrator | Friday 27 February 2026 00:52:29 +0000 (0:00:01.092) 0:03:54.570 *******
2026-02-27 01:00:41.326657 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.326660 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.326664 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.326668 | orchestrator |
2026-02-27 01:00:41.326671 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-27 01:00:41.326675 | orchestrator | Friday 27 February 2026 00:52:29 +0000 (0:00:00.381) 0:03:54.951 *******
2026-02-27 01:00:41.326679 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.326683 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.326687 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.326690 | orchestrator |
2026-02-27 01:00:41.326694 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-27 01:00:41.326701 | orchestrator | Friday 27 February 2026 00:52:31 +0000 (0:00:01.685) 0:03:56.636 *******
2026-02-27 01:00:41.326705 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 01:00:41.326709 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-27 01:00:41.326713 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-27 01:00:41.326716 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326720 | orchestrator |
2026-02-27 01:00:41.326724 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-27 01:00:41.326727 | orchestrator | Friday 27 February 2026 00:52:31 +0000 (0:00:00.737) 0:03:57.374 *******
2026-02-27 01:00:41.326731 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.326735 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.326738 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.326742 | orchestrator |
2026-02-27 01:00:41.326746 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-27 01:00:41.326750 | orchestrator | Friday 27 February 2026 00:52:32 +0000 (0:00:00.375) 0:03:57.749 *******
2026-02-27 01:00:41.326753 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.326757 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.326761 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.326764 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.326768 | orchestrator |
2026-02-27 01:00:41.326772 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-27 01:00:41.326781 | orchestrator | Friday 27 February 2026 00:52:33 +0000 (0:00:01.119) 0:03:58.869 *******
2026-02-27 01:00:41.326784 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.326788 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.326792 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.326796 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326799 | orchestrator |
2026-02-27 01:00:41.326803 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-27 01:00:41.326807 | orchestrator | Friday 27 February 2026 00:52:33 +0000 (0:00:00.452) 0:03:59.321 *******
2026-02-27 01:00:41.326811 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326814 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326818 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326822 | orchestrator |
2026-02-27 01:00:41.326825 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-27 01:00:41.326829 | orchestrator | Friday 27 February 2026 00:52:34 +0000 (0:00:00.370) 0:03:59.692 *******
2026-02-27 01:00:41.326833 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326836 | orchestrator |
2026-02-27 01:00:41.326840 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-27 01:00:41.326844 | orchestrator | Friday 27 February 2026 00:52:34 +0000 (0:00:00.262) 0:03:59.955 *******
2026-02-27 01:00:41.326848 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326851 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326855 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326859 | orchestrator |
2026-02-27 01:00:41.326862 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-27 01:00:41.326866 | orchestrator | Friday 27 February 2026 00:52:34 +0000 (0:00:00.318) 0:04:00.273 *******
2026-02-27 01:00:41.326870 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326873 | orchestrator |
2026-02-27 01:00:41.326877 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-27 01:00:41.326881 | orchestrator | Friday 27 February 2026 00:52:35 +0000 (0:00:00.222) 0:04:00.495 *******
2026-02-27 01:00:41.326885 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326888 | orchestrator |
2026-02-27 01:00:41.326892 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-27 01:00:41.326899 | orchestrator | Friday 27 February 2026 00:52:35 +0000 (0:00:00.240) 0:04:00.735 *******
2026-02-27 01:00:41.326903 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326907 | orchestrator |
2026-02-27 01:00:41.326910 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-27 01:00:41.326914 | orchestrator | Friday 27 February 2026 00:52:35 +0000 (0:00:00.143) 0:04:00.879 *******
2026-02-27 01:00:41.326918 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326921 | orchestrator |
2026-02-27 01:00:41.326925 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-27 01:00:41.326929 | orchestrator | Friday 27 February 2026 00:52:36 +0000 (0:00:00.767) 0:04:01.646 *******
2026-02-27 01:00:41.326932 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326936 | orchestrator |
2026-02-27 01:00:41.326940 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-27 01:00:41.326943 | orchestrator | Friday 27 February 2026 00:52:36 +0000 (0:00:00.250) 0:04:01.897 *******
2026-02-27 01:00:41.326947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.326951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.326955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.326958 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326962 | orchestrator |
2026-02-27 01:00:41.326966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-27 01:00:41.326972 | orchestrator | Friday 27 February 2026 00:52:36 +0000 (0:00:00.458) 0:04:02.356 *******
2026-02-27 01:00:41.326976 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.326979 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.326983 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.326987 | orchestrator |
2026-02-27 01:00:41.326990 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-27 01:00:41.326994 | orchestrator | Friday 27 February 2026 00:52:37 +0000 (0:00:00.444) 0:04:02.800 *******
2026-02-27 01:00:41.326998 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.327002 | orchestrator |
2026-02-27 01:00:41.327005 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-27 01:00:41.327009 | orchestrator | Friday 27 February 2026 00:52:37 +0000 (0:00:00.260) 0:04:03.060 *******
2026-02-27 01:00:41.327013 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.327016 | orchestrator |
2026-02-27 01:00:41.327020 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-27 01:00:41.327024 | orchestrator | Friday 27 February 2026 00:52:37 +0000 (0:00:00.254) 0:04:03.314 *******
2026-02-27 01:00:41.327027 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.327031 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.327035 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.327039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.327042 | orchestrator |
2026-02-27 01:00:41.327046 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-27 01:00:41.327050 | orchestrator | Friday 27 February 2026 00:52:39 +0000 (0:00:01.306) 0:04:04.621 *******
2026-02-27 01:00:41.327053 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.327057 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.327061 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.327064 | orchestrator |
2026-02-27 01:00:41.327068 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-27 01:00:41.327072 | orchestrator | Friday 27 February 2026 00:52:39 +0000 (0:00:00.339) 0:04:04.961 *******
2026-02-27 01:00:41.327076 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.327079 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.327083 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.327087 | orchestrator |
2026-02-27 01:00:41.327091 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-27 01:00:41.327100 | orchestrator | Friday 27 February 2026 00:52:40 +0000 (0:00:01.127) 0:04:06.089 *******
2026-02-27 01:00:41.327104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.327108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.327111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.327117 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.327123 | orchestrator |
2026-02-27 01:00:41.327129 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-27 01:00:41.327135 | orchestrator | Friday 27 February 2026 00:52:41 +0000 (0:00:00.850) 0:04:06.939 *******
2026-02-27 01:00:41.327141 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.327148 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.327153 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.327159 | orchestrator |
2026-02-27 01:00:41.327182 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-27 01:00:41.327186 | orchestrator | Friday 27 February 2026 00:52:42 +0000 (0:00:00.568) 0:04:07.508 *******
2026-02-27 01:00:41.327190 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.327194 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.327197 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.327201 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.327205 | orchestrator |
2026-02-27 01:00:41.327209 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-27 01:00:41.327215 | orchestrator | Friday 27 February 2026 00:52:43 +0000 (0:00:00.912) 0:04:08.420 *******
2026-02-27 01:00:41.327221 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.327227 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.327233 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.327239 | orchestrator |
2026-02-27 01:00:41.327246 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-27 01:00:41.327254 | orchestrator | Friday 27 February 2026 00:52:43 +0000 (0:00:00.634) 0:04:09.055 *******
2026-02-27 01:00:41.327260 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.327266 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.327271 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.327277 | orchestrator |
2026-02-27 01:00:41.327284 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-27 01:00:41.327290 | orchestrator | Friday 27 February 2026 00:52:44 +0000 (0:00:01.271) 0:04:10.327 *******
2026-02-27 01:00:41.327296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-27 01:00:41.327302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-27 01:00:41.327308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-27 01:00:41.327314 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.327320 | orchestrator |
2026-02-27 01:00:41.327328 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-27 01:00:41.327332 | orchestrator | Friday 27 February 2026 00:52:45 +0000 (0:00:00.633) 0:04:10.961 *******
2026-02-27 01:00:41.327336 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.327339 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.327343 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.327347 | orchestrator |
2026-02-27 01:00:41.327351 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-27 01:00:41.327354 | orchestrator | Friday 27 February 2026 00:52:45 +0000 (0:00:00.370) 0:04:11.331 *******
2026-02-27 01:00:41.327358 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.327362 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.327366 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.327369 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.327374 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.327384 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.327397 | orchestrator |
2026-02-27 01:00:41.327404 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-27 01:00:41.327410 | orchestrator | Friday 27 February 2026 00:52:46 +0000 (0:00:00.999) 0:04:12.331 *******
2026-02-27 01:00:41.327416 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.327423 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.327429 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.327435 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.327442 | orchestrator |
2026-02-27 01:00:41.327447 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-27 01:00:41.327454 | orchestrator | Friday 27 February 2026 00:52:48 +0000 (0:00:01.084) 0:04:13.416 *******
2026-02-27 01:00:41.327460 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.327466 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.327472 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.327478 | orchestrator |
2026-02-27 01:00:41.327484 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-27 01:00:41.327488 | orchestrator | Friday 27 February 2026 00:52:48 +0000 (0:00:00.628) 0:04:14.044 *******
2026-02-27 01:00:41.327492 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.327495 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.327499 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.327503 | orchestrator |
2026-02-27
01:00:41.327506 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-27 01:00:41.327510 | orchestrator | Friday 27 February 2026 00:52:50 +0000 (0:00:01.478) 0:04:15.522 ******* 2026-02-27 01:00:41.327514 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-27 01:00:41.327517 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-27 01:00:41.327521 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-27 01:00:41.327525 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327528 | orchestrator | 2026-02-27 01:00:41.327532 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-27 01:00:41.327536 | orchestrator | Friday 27 February 2026 00:52:50 +0000 (0:00:00.631) 0:04:16.154 ******* 2026-02-27 01:00:41.327540 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.327544 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.327551 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.327555 | orchestrator | 2026-02-27 01:00:41.327559 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-27 01:00:41.327562 | orchestrator | 2026-02-27 01:00:41.327566 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-27 01:00:41.327570 | orchestrator | Friday 27 February 2026 00:52:51 +0000 (0:00:00.644) 0:04:16.798 ******* 2026-02-27 01:00:41.327573 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.327577 | orchestrator | 2026-02-27 01:00:41.327581 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-27 01:00:41.327585 | orchestrator | Friday 27 February 2026 00:52:52 +0000 (0:00:01.247) 0:04:18.046 ******* 2026-02-27 
01:00:41.327589 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.327592 | orchestrator | 2026-02-27 01:00:41.327596 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-27 01:00:41.327600 | orchestrator | Friday 27 February 2026 00:52:53 +0000 (0:00:00.537) 0:04:18.583 ******* 2026-02-27 01:00:41.327603 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.327607 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.327611 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.327614 | orchestrator | 2026-02-27 01:00:41.327618 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-27 01:00:41.327622 | orchestrator | Friday 27 February 2026 00:52:54 +0000 (0:00:01.209) 0:04:19.792 ******* 2026-02-27 01:00:41.327631 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327635 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327638 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327642 | orchestrator | 2026-02-27 01:00:41.327646 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-27 01:00:41.327650 | orchestrator | Friday 27 February 2026 00:52:54 +0000 (0:00:00.346) 0:04:20.139 ******* 2026-02-27 01:00:41.327653 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327657 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327661 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327664 | orchestrator | 2026-02-27 01:00:41.327668 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-27 01:00:41.327672 | orchestrator | Friday 27 February 2026 00:52:55 +0000 (0:00:00.519) 0:04:20.658 ******* 2026-02-27 01:00:41.327676 | orchestrator | skipping: [testbed-node-0] 
2026-02-27 01:00:41.327679 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327683 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327687 | orchestrator | 2026-02-27 01:00:41.327690 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-27 01:00:41.327694 | orchestrator | Friday 27 February 2026 00:52:55 +0000 (0:00:00.360) 0:04:21.019 ******* 2026-02-27 01:00:41.327698 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.327701 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.327705 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.327709 | orchestrator | 2026-02-27 01:00:41.327713 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-27 01:00:41.327716 | orchestrator | Friday 27 February 2026 00:52:56 +0000 (0:00:01.142) 0:04:22.161 ******* 2026-02-27 01:00:41.327720 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327724 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327727 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327731 | orchestrator | 2026-02-27 01:00:41.327735 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-27 01:00:41.327739 | orchestrator | Friday 27 February 2026 00:52:57 +0000 (0:00:00.365) 0:04:22.527 ******* 2026-02-27 01:00:41.327746 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327749 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327753 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327757 | orchestrator | 2026-02-27 01:00:41.327761 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-27 01:00:41.327764 | orchestrator | Friday 27 February 2026 00:52:57 +0000 (0:00:00.307) 0:04:22.834 ******* 2026-02-27 01:00:41.327768 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.327772 
| orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.327775 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.327779 | orchestrator | 2026-02-27 01:00:41.327783 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-27 01:00:41.327787 | orchestrator | Friday 27 February 2026 00:52:58 +0000 (0:00:00.810) 0:04:23.645 ******* 2026-02-27 01:00:41.327790 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.327794 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.327798 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.327802 | orchestrator | 2026-02-27 01:00:41.327805 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-27 01:00:41.327809 | orchestrator | Friday 27 February 2026 00:52:59 +0000 (0:00:01.154) 0:04:24.799 ******* 2026-02-27 01:00:41.327813 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327817 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327820 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327824 | orchestrator | 2026-02-27 01:00:41.327828 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-27 01:00:41.327831 | orchestrator | Friday 27 February 2026 00:52:59 +0000 (0:00:00.329) 0:04:25.128 ******* 2026-02-27 01:00:41.327835 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.327844 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.327848 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.327852 | orchestrator | 2026-02-27 01:00:41.327855 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-27 01:00:41.327859 | orchestrator | Friday 27 February 2026 00:53:00 +0000 (0:00:00.361) 0:04:25.489 ******* 2026-02-27 01:00:41.327863 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327867 | orchestrator | skipping: [testbed-node-1] 
2026-02-27 01:00:41.327871 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327874 | orchestrator | 2026-02-27 01:00:41.327878 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-27 01:00:41.327885 | orchestrator | Friday 27 February 2026 00:53:00 +0000 (0:00:00.352) 0:04:25.842 ******* 2026-02-27 01:00:41.327889 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327892 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327896 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327900 | orchestrator | 2026-02-27 01:00:41.327904 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-27 01:00:41.327907 | orchestrator | Friday 27 February 2026 00:53:00 +0000 (0:00:00.394) 0:04:26.237 ******* 2026-02-27 01:00:41.327911 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327915 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327918 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327922 | orchestrator | 2026-02-27 01:00:41.327926 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-27 01:00:41.327930 | orchestrator | Friday 27 February 2026 00:53:01 +0000 (0:00:00.768) 0:04:27.005 ******* 2026-02-27 01:00:41.327933 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327937 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.327941 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327945 | orchestrator | 2026-02-27 01:00:41.327948 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-27 01:00:41.327952 | orchestrator | Friday 27 February 2026 00:53:01 +0000 (0:00:00.368) 0:04:27.374 ******* 2026-02-27 01:00:41.327956 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.327960 | orchestrator | skipping: [testbed-node-1] 
2026-02-27 01:00:41.327963 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.327967 | orchestrator | 2026-02-27 01:00:41.327971 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-27 01:00:41.327975 | orchestrator | Friday 27 February 2026 00:53:02 +0000 (0:00:00.450) 0:04:27.824 ******* 2026-02-27 01:00:41.327978 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.327982 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.327986 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.327990 | orchestrator | 2026-02-27 01:00:41.327993 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-27 01:00:41.327997 | orchestrator | Friday 27 February 2026 00:53:03 +0000 (0:00:00.943) 0:04:28.767 ******* 2026-02-27 01:00:41.328001 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328005 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328008 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.328012 | orchestrator | 2026-02-27 01:00:41.328016 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-27 01:00:41.328020 | orchestrator | Friday 27 February 2026 00:53:04 +0000 (0:00:00.720) 0:04:29.488 ******* 2026-02-27 01:00:41.328023 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328027 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328031 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.328034 | orchestrator | 2026-02-27 01:00:41.328038 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-27 01:00:41.328042 | orchestrator | Friday 27 February 2026 00:53:04 +0000 (0:00:00.616) 0:04:30.104 ******* 2026-02-27 01:00:41.328046 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328049 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328053 | orchestrator | ok: [testbed-node-2] 
2026-02-27 01:00:41.328063 | orchestrator | 2026-02-27 01:00:41.328067 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-27 01:00:41.328070 | orchestrator | Friday 27 February 2026 00:53:05 +0000 (0:00:00.355) 0:04:30.460 ******* 2026-02-27 01:00:41.328074 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.328078 | orchestrator | 2026-02-27 01:00:41.328082 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-27 01:00:41.328085 | orchestrator | Friday 27 February 2026 00:53:05 +0000 (0:00:00.866) 0:04:31.327 ******* 2026-02-27 01:00:41.328089 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.328093 | orchestrator | 2026-02-27 01:00:41.328100 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-27 01:00:41.328103 | orchestrator | Friday 27 February 2026 00:53:06 +0000 (0:00:00.188) 0:04:31.515 ******* 2026-02-27 01:00:41.328107 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-27 01:00:41.328111 | orchestrator | 2026-02-27 01:00:41.328117 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-27 01:00:41.328123 | orchestrator | Friday 27 February 2026 00:53:07 +0000 (0:00:01.219) 0:04:32.735 ******* 2026-02-27 01:00:41.328129 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328135 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328141 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.328147 | orchestrator | 2026-02-27 01:00:41.328153 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-27 01:00:41.328159 | orchestrator | Friday 27 February 2026 00:53:07 +0000 (0:00:00.405) 0:04:33.140 ******* 2026-02-27 01:00:41.328201 | orchestrator | ok: [testbed-node-0] 
2026-02-27 01:00:41.328206 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328210 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.328213 | orchestrator | 2026-02-27 01:00:41.328217 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-27 01:00:41.328221 | orchestrator | Friday 27 February 2026 00:53:08 +0000 (0:00:00.701) 0:04:33.842 ******* 2026-02-27 01:00:41.328225 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328228 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328232 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328236 | orchestrator | 2026-02-27 01:00:41.328240 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-27 01:00:41.328244 | orchestrator | Friday 27 February 2026 00:53:10 +0000 (0:00:01.560) 0:04:35.402 ******* 2026-02-27 01:00:41.328247 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328251 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328255 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328259 | orchestrator | 2026-02-27 01:00:41.328263 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-27 01:00:41.328266 | orchestrator | Friday 27 February 2026 00:53:10 +0000 (0:00:00.974) 0:04:36.376 ******* 2026-02-27 01:00:41.328270 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328274 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328278 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328282 | orchestrator | 2026-02-27 01:00:41.328289 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-27 01:00:41.328293 | orchestrator | Friday 27 February 2026 00:53:11 +0000 (0:00:00.853) 0:04:37.230 ******* 2026-02-27 01:00:41.328297 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328301 | 
orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328305 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.328309 | orchestrator | 2026-02-27 01:00:41.328312 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-27 01:00:41.328316 | orchestrator | Friday 27 February 2026 00:53:12 +0000 (0:00:00.767) 0:04:37.997 ******* 2026-02-27 01:00:41.328320 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328324 | orchestrator | 2026-02-27 01:00:41.328328 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-27 01:00:41.328336 | orchestrator | Friday 27 February 2026 00:53:14 +0000 (0:00:01.766) 0:04:39.764 ******* 2026-02-27 01:00:41.328340 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328344 | orchestrator | 2026-02-27 01:00:41.328348 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-27 01:00:41.328351 | orchestrator | Friday 27 February 2026 00:53:15 +0000 (0:00:00.769) 0:04:40.533 ******* 2026-02-27 01:00:41.328355 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-27 01:00:41.328359 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.328363 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.328367 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-27 01:00:41.328370 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-27 01:00:41.328374 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-27 01:00:41.328378 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:00:41.328382 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-27 01:00:41.328386 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:00:41.328390 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-02-27 01:00:41.328394 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-27 01:00:41.328398 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-27 01:00:41.328402 | orchestrator | 2026-02-27 01:00:41.328405 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-27 01:00:41.328409 | orchestrator | Friday 27 February 2026 00:53:18 +0000 (0:00:03.517) 0:04:44.051 ******* 2026-02-27 01:00:41.328413 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328417 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328421 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328424 | orchestrator | 2026-02-27 01:00:41.328428 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-27 01:00:41.328432 | orchestrator | Friday 27 February 2026 00:53:21 +0000 (0:00:02.405) 0:04:46.456 ******* 2026-02-27 01:00:41.328436 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328439 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328443 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.328447 | orchestrator | 2026-02-27 01:00:41.328453 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-27 01:00:41.328459 | orchestrator | Friday 27 February 2026 00:53:22 +0000 (0:00:00.970) 0:04:47.427 ******* 2026-02-27 01:00:41.328465 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:00:41.328471 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:00:41.328477 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:00:41.328483 | orchestrator | 2026-02-27 01:00:41.328490 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-27 01:00:41.328495 | orchestrator | Friday 27 February 2026 00:53:23 +0000 
(0:00:01.217) 0:04:48.644 ******* 2026-02-27 01:00:41.328506 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328511 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328517 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328522 | orchestrator | 2026-02-27 01:00:41.328529 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-27 01:00:41.328535 | orchestrator | Friday 27 February 2026 00:53:25 +0000 (0:00:02.280) 0:04:50.925 ******* 2026-02-27 01:00:41.328541 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328547 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328553 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328559 | orchestrator | 2026-02-27 01:00:41.328566 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-27 01:00:41.328572 | orchestrator | Friday 27 February 2026 00:53:27 +0000 (0:00:01.660) 0:04:52.586 ******* 2026-02-27 01:00:41.328584 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.328589 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.328593 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.328597 | orchestrator | 2026-02-27 01:00:41.328601 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-27 01:00:41.328606 | orchestrator | Friday 27 February 2026 00:53:27 +0000 (0:00:00.371) 0:04:52.957 ******* 2026-02-27 01:00:41.328612 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.328618 | orchestrator | 2026-02-27 01:00:41.328624 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-27 01:00:41.328630 | orchestrator | Friday 27 February 2026 00:53:28 +0000 (0:00:00.835) 0:04:53.793 ******* 2026-02-27 01:00:41.328636 | 
orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.328643 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.328649 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.328655 | orchestrator | 2026-02-27 01:00:41.328662 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-27 01:00:41.328668 | orchestrator | Friday 27 February 2026 00:53:28 +0000 (0:00:00.288) 0:04:54.082 ******* 2026-02-27 01:00:41.328673 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:00:41.328680 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:00:41.328685 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:00:41.328689 | orchestrator | 2026-02-27 01:00:41.328693 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-27 01:00:41.328700 | orchestrator | Friday 27 February 2026 00:53:28 +0000 (0:00:00.284) 0:04:54.366 ******* 2026-02-27 01:00:41.328704 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.328707 | orchestrator | 2026-02-27 01:00:41.328711 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-27 01:00:41.328715 | orchestrator | Friday 27 February 2026 00:53:29 +0000 (0:00:00.958) 0:04:55.324 ******* 2026-02-27 01:00:41.328719 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328722 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328726 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328730 | orchestrator | 2026-02-27 01:00:41.328734 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-27 01:00:41.328737 | orchestrator | Friday 27 February 2026 00:53:32 +0000 (0:00:02.726) 0:04:58.051 ******* 2026-02-27 01:00:41.328741 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328745 | 
orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328748 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328752 | orchestrator | 2026-02-27 01:00:41.328756 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-27 01:00:41.328759 | orchestrator | Friday 27 February 2026 00:53:34 +0000 (0:00:01.865) 0:04:59.916 ******* 2026-02-27 01:00:41.328763 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328767 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328770 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328774 | orchestrator | 2026-02-27 01:00:41.328778 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-27 01:00:41.328782 | orchestrator | Friday 27 February 2026 00:53:36 +0000 (0:00:02.211) 0:05:02.128 ******* 2026-02-27 01:00:41.328785 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:00:41.328789 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:00:41.328793 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:00:41.328797 | orchestrator | 2026-02-27 01:00:41.328800 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-27 01:00:41.328804 | orchestrator | Friday 27 February 2026 00:53:39 +0000 (0:00:02.450) 0:05:04.578 ******* 2026-02-27 01:00:41.328808 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:00:41.328817 | orchestrator | 2026-02-27 01:00:41.328821 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-27 01:00:41.328825 | orchestrator | Friday 27 February 2026 00:53:39 +0000 (0:00:00.713) 0:05:05.291 ******* 2026-02-27 01:00:41.328828 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-27 01:00:41.328832 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.328836 | orchestrator |
2026-02-27 01:00:41.328840 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-27 01:00:41.328843 | orchestrator | Friday 27 February 2026 00:54:02 +0000 (0:00:22.145) 0:05:27.437 *******
2026-02-27 01:00:41.328847 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.328851 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.328854 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.328858 | orchestrator |
2026-02-27 01:00:41.328862 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-27 01:00:41.328865 | orchestrator | Friday 27 February 2026 00:54:11 +0000 (0:00:09.274) 0:05:36.712 *******
2026-02-27 01:00:41.328869 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.328873 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.328876 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.328880 | orchestrator |
2026-02-27 01:00:41.328884 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-27 01:00:41.328892 | orchestrator | Friday 27 February 2026 00:54:11 +0000 (0:00:00.575) 0:05:37.287 *******
2026-02-27 01:00:41.328898 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__80bcf851acc0a1bb0fadf6e17f84691042ab6f1e'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-27 01:00:41.328904 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__80bcf851acc0a1bb0fadf6e17f84691042ab6f1e'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-27 01:00:41.328910 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__80bcf851acc0a1bb0fadf6e17f84691042ab6f1e'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-27 01:00:41.328916 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__80bcf851acc0a1bb0fadf6e17f84691042ab6f1e'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-27 01:00:41.328925 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__80bcf851acc0a1bb0fadf6e17f84691042ab6f1e'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-27 01:00:41.328930 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__80bcf851acc0a1bb0fadf6e17f84691042ab6f1e'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__80bcf851acc0a1bb0fadf6e17f84691042ab6f1e'}])
2026-02-27 01:00:41.328940 | orchestrator |
2026-02-27 01:00:41.328944 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-27 01:00:41.328947 | orchestrator | Friday 27 February 2026 00:54:27 +0000 (0:00:15.125) 0:05:52.413 *******
2026-02-27 01:00:41.328951 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.328955 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.328959 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.328962 | orchestrator |
2026-02-27 01:00:41.328966 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-27 01:00:41.328970 | orchestrator | Friday 27 February 2026 00:54:27 +0000 (0:00:00.358) 0:05:52.771 *******
2026-02-27 01:00:41.328974 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.328977 | orchestrator |
2026-02-27 01:00:41.328981 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-27 01:00:41.328985 | orchestrator | Friday 27 February 2026 00:54:28 +0000 (0:00:00.844) 0:05:53.616 *******
2026-02-27 01:00:41.328988 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.328992 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.328996 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.328999 | orchestrator |
2026-02-27 01:00:41.329003 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-27 01:00:41.329007 | orchestrator | Friday 27 February 2026 00:54:28 +0000 (0:00:00.341) 0:05:53.957 *******
2026-02-27 01:00:41.329011 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329014 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329018 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329022 | orchestrator |
2026-02-27 01:00:41.329025 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-27 01:00:41.329029 | orchestrator | Friday 27 February 2026 00:54:28 +0000 (0:00:00.377) 0:05:54.335 *******
2026-02-27 01:00:41.329033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 01:00:41.329036 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-27 01:00:41.329040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-27 01:00:41.329044 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329048 | orchestrator |
2026-02-27 01:00:41.329051 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-27 01:00:41.329055 | orchestrator | Friday 27 February 2026 00:54:30 +0000 (0:00:01.243) 0:05:55.579 *******
2026-02-27 01:00:41.329059 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329065 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329069 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329072 | orchestrator |
2026-02-27 01:00:41.329076 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-27 01:00:41.329080 | orchestrator |
2026-02-27 01:00:41.329084 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-27 01:00:41.329087 | orchestrator | Friday 27 February 2026 00:54:30 +0000 (0:00:00.622) 0:05:56.201 *******
2026-02-27 01:00:41.329091 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.329095 | orchestrator |
2026-02-27 01:00:41.329099 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-27 01:00:41.329102 | orchestrator | Friday 27 February 2026 00:54:31 +0000 (0:00:00.576) 0:05:56.777 *******
2026-02-27 01:00:41.329106 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.329110 | orchestrator |
2026-02-27 01:00:41.329114 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-27 01:00:41.329117 | orchestrator | Friday 27 February 2026 00:54:32 +0000 (0:00:00.861) 0:05:57.639 *******
2026-02-27 01:00:41.329121 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329125 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329132 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329136 | orchestrator |
2026-02-27 01:00:41.329140 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-27 01:00:41.329143 | orchestrator | Friday 27 February 2026 00:54:33 +0000 (0:00:00.768) 0:05:58.407 *******
2026-02-27 01:00:41.329147 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329151 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329155 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329158 | orchestrator |
2026-02-27 01:00:41.329162 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-27 01:00:41.329182 | orchestrator | Friday 27 February 2026 00:54:33 +0000 (0:00:00.323) 0:05:58.731 *******
2026-02-27 01:00:41.329188 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329194 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329200 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329205 | orchestrator |
2026-02-27 01:00:41.329215 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-27 01:00:41.329221 | orchestrator | Friday 27 February 2026 00:54:33 +0000 (0:00:00.629) 0:05:59.360 *******
2026-02-27 01:00:41.329227 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329232 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329238 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329244 | orchestrator |
2026-02-27 01:00:41.329250 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-27 01:00:41.329255 | orchestrator | Friday 27 February 2026 00:54:34 +0000 (0:00:00.330) 0:05:59.691 *******
2026-02-27 01:00:41.329261 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329266 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329272 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329278 | orchestrator |
2026-02-27 01:00:41.329284 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-27 01:00:41.329290 | orchestrator | Friday 27 February 2026 00:54:35 +0000 (0:00:00.776) 0:06:00.468 *******
2026-02-27 01:00:41.329295 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329301 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329306 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329312 | orchestrator |
2026-02-27 01:00:41.329318 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-27 01:00:41.329324 | orchestrator | Friday 27 February 2026 00:54:35 +0000 (0:00:00.396) 0:06:00.864 *******
2026-02-27 01:00:41.329329 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329334 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329340 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329345 | orchestrator |
2026-02-27 01:00:41.329351 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-27 01:00:41.329356 | orchestrator | Friday 27 February 2026 00:54:36 +0000 (0:00:00.574) 0:06:01.439 *******
2026-02-27 01:00:41.329362 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329367 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329373 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329379 | orchestrator |
2026-02-27 01:00:41.329384 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-27 01:00:41.329390 | orchestrator | Friday 27 February 2026 00:54:36 +0000 (0:00:00.743) 0:06:02.182 *******
2026-02-27 01:00:41.329396 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329402 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329407 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329413 | orchestrator |
2026-02-27 01:00:41.329418 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-27 01:00:41.329424 | orchestrator | Friday 27 February 2026 00:54:37 +0000 (0:00:00.862) 0:06:03.045 *******
2026-02-27 01:00:41.329430 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329436 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329441 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329447 | orchestrator |
2026-02-27 01:00:41.329458 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-27 01:00:41.329463 | orchestrator | Friday 27 February 2026 00:54:38 +0000 (0:00:00.355) 0:06:03.400 *******
2026-02-27 01:00:41.329468 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329474 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329479 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329485 | orchestrator |
2026-02-27 01:00:41.329490 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-27 01:00:41.329496 | orchestrator | Friday 27 February 2026 00:54:38 +0000 (0:00:00.626) 0:06:04.027 *******
2026-02-27 01:00:41.329502 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329507 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329513 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329518 | orchestrator |
2026-02-27 01:00:41.329524 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-27 01:00:41.329534 | orchestrator | Friday 27 February 2026 00:54:38 +0000 (0:00:00.353) 0:06:04.380 *******
2026-02-27 01:00:41.329540 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329545 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329551 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329557 | orchestrator |
2026-02-27 01:00:41.329562 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-27 01:00:41.329568 | orchestrator | Friday 27 February 2026 00:54:39 +0000 (0:00:00.349) 0:06:04.729 *******
2026-02-27 01:00:41.329573 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329579 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329584 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329590 | orchestrator |
2026-02-27 01:00:41.329596 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-27 01:00:41.329602 | orchestrator | Friday 27 February 2026 00:54:39 +0000 (0:00:00.317) 0:06:05.047 *******
2026-02-27 01:00:41.329607 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329613 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329618 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329624 | orchestrator |
2026-02-27 01:00:41.329630 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-27 01:00:41.329635 | orchestrator | Friday 27 February 2026 00:54:39 +0000 (0:00:00.305) 0:06:05.353 *******
2026-02-27 01:00:41.329641 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329646 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329652 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329658 | orchestrator |
2026-02-27 01:00:41.329664 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-27 01:00:41.329669 | orchestrator | Friday 27 February 2026 00:54:40 +0000 (0:00:00.599) 0:06:05.952 *******
2026-02-27 01:00:41.329674 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329680 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329686 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329692 | orchestrator |
2026-02-27 01:00:41.329698 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-27 01:00:41.329703 | orchestrator | Friday 27 February 2026 00:54:40 +0000 (0:00:00.325) 0:06:06.278 *******
2026-02-27 01:00:41.329709 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329716 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329722 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329727 | orchestrator |
2026-02-27 01:00:41.329733 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-27 01:00:41.329744 | orchestrator | Friday 27 February 2026 00:54:41 +0000 (0:00:00.356) 0:06:06.635 *******
2026-02-27 01:00:41.329750 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329756 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329761 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329767 | orchestrator |
2026-02-27 01:00:41.329772 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-27 01:00:41.329785 | orchestrator | Friday 27 February 2026 00:54:42 +0000 (0:00:00.813) 0:06:07.448 *******
2026-02-27 01:00:41.329791 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 01:00:41.329796 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-27 01:00:41.329803 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-27 01:00:41.329809 | orchestrator |
2026-02-27 01:00:41.329814 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-27 01:00:41.329820 | orchestrator | Friday 27 February 2026 00:54:42 +0000 (0:00:00.649) 0:06:08.098 *******
2026-02-27 01:00:41.329826 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.329833 | orchestrator |
2026-02-27 01:00:41.329839 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-27 01:00:41.329845 | orchestrator | Friday 27 February 2026 00:54:43 +0000 (0:00:00.575) 0:06:08.674 *******
2026-02-27 01:00:41.329851 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.329858 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.329864 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.329870 | orchestrator |
2026-02-27 01:00:41.329876 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-27 01:00:41.329883 | orchestrator | Friday 27 February 2026 00:54:43 +0000 (0:00:00.672) 0:06:09.347 *******
2026-02-27 01:00:41.329889 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.329895 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.329900 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.329907 | orchestrator |
2026-02-27 01:00:41.329913 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-27 01:00:41.329917 | orchestrator | Friday 27 February 2026 00:54:44 +0000 (0:00:00.575) 0:06:09.923 *******
2026-02-27 01:00:41.329920 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-27 01:00:41.329925 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-27 01:00:41.329928 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-27 01:00:41.329932 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-27 01:00:41.329936 | orchestrator |
2026-02-27 01:00:41.329940 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-27 01:00:41.329944 | orchestrator | Friday 27 February 2026 00:54:55 +0000 (0:00:10.687) 0:06:20.610 *******
2026-02-27 01:00:41.329947 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.329951 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.329955 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.329958 | orchestrator |
2026-02-27 01:00:41.329962 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-27 01:00:41.329966 | orchestrator | Friday 27 February 2026 00:54:55 +0000 (0:00:00.402) 0:06:21.012 *******
2026-02-27 01:00:41.329970 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-27 01:00:41.329973 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-27 01:00:41.329977 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-27 01:00:41.329981 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-27 01:00:41.329985 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-27 01:00:41.329994 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-27 01:00:41.329997 | orchestrator |
2026-02-27 01:00:41.330001 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-27 01:00:41.330005 | orchestrator | Friday 27 February 2026 00:54:58 +0000 (0:00:02.524) 0:06:23.536 *******
2026-02-27 01:00:41.330008 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-27 01:00:41.330034 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-27 01:00:41.330038 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-27 01:00:41.330042 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-27 01:00:41.330051 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-27 01:00:41.330055 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-27 01:00:41.330062 | orchestrator |
2026-02-27 01:00:41.330066 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-27 01:00:41.330070 | orchestrator | Friday 27 February 2026 00:54:59 +0000 (0:00:01.304) 0:06:24.841 *******
2026-02-27 01:00:41.330074 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.330078 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.330081 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.330085 | orchestrator |
2026-02-27 01:00:41.330089 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-27 01:00:41.330093 | orchestrator | Friday 27 February 2026 00:55:00 +0000 (0:00:01.168) 0:06:26.009 *******
2026-02-27 01:00:41.330097 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.330100 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.330104 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.330108 | orchestrator |
2026-02-27 01:00:41.330112 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-27 01:00:41.330116 | orchestrator | Friday 27 February 2026 00:55:01 +0000 (0:00:00.433) 0:06:26.443 *******
2026-02-27 01:00:41.330120 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.330124 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.330128 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.330132 | orchestrator |
2026-02-27 01:00:41.330135 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-27 01:00:41.330139 | orchestrator | Friday 27 February 2026 00:55:01 +0000 (0:00:00.411) 0:06:26.854 *******
2026-02-27 01:00:41.330147 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.330151 | orchestrator |
2026-02-27 01:00:41.330154 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-27 01:00:41.330158 | orchestrator | Friday 27 February 2026 00:55:02 +0000 (0:00:00.868) 0:06:27.723 *******
2026-02-27 01:00:41.330162 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.330204 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.330210 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.330216 | orchestrator |
2026-02-27 01:00:41.330222 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-27 01:00:41.330227 | orchestrator | Friday 27 February 2026 00:55:02 +0000 (0:00:00.370) 0:06:28.094 *******
2026-02-27 01:00:41.330234 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.330239 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.330245 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.330251 | orchestrator |
2026-02-27 01:00:41.330257 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-27 01:00:41.330263 | orchestrator | Friday 27 February 2026 00:55:03 +0000 (0:00:00.350) 0:06:28.444 *******
2026-02-27 01:00:41.330269 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.330274 | orchestrator |
2026-02-27 01:00:41.330279 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-27 01:00:41.330285 | orchestrator | Friday 27 February 2026 00:55:03 +0000 (0:00:00.826) 0:06:29.270 *******
2026-02-27 01:00:41.330291 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.330296 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.330301 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.330307 | orchestrator |
2026-02-27 01:00:41.330313 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-27 01:00:41.330318 | orchestrator | Friday 27 February 2026 00:55:05 +0000 (0:00:01.402) 0:06:30.672 *******
2026-02-27 01:00:41.330324 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.330330 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.330335 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.330351 | orchestrator |
2026-02-27 01:00:41.330357 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-27 01:00:41.330363 | orchestrator | Friday 27 February 2026 00:55:06 +0000 (0:00:01.251) 0:06:31.923 *******
2026-02-27 01:00:41.330369 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.330375 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.330381 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.330386 | orchestrator |
2026-02-27 01:00:41.330392 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-27 01:00:41.330398 | orchestrator | Friday 27 February 2026 00:55:08 +0000 (0:00:01.922) 0:06:33.846 *******
2026-02-27 01:00:41.330404 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.330411 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.330416 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.330422 | orchestrator |
2026-02-27 01:00:41.330427 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-27 01:00:41.330433 | orchestrator | Friday 27 February 2026 00:55:10 +0000 (0:00:02.364) 0:06:36.210 *******
2026-02-27 01:00:41.330438 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.330444 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.330450 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-27 01:00:41.330455 | orchestrator |
2026-02-27 01:00:41.330461 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-27 01:00:41.330466 | orchestrator | Friday 27 February 2026 00:55:11 +0000 (0:00:00.432) 0:06:36.642 *******
2026-02-27 01:00:41.330489 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-27 01:00:41.330496 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-27 01:00:41.330501 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-27 01:00:41.330505 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-27 01:00:41.330509 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-02-27 01:00:41.330513 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:00:41.330517 | orchestrator |
2026-02-27 01:00:41.330520 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-27 01:00:41.330524 | orchestrator | Friday 27 February 2026 00:55:41 +0000 (0:00:30.558) 0:07:07.201 *******
2026-02-27 01:00:41.330528 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:00:41.330531 | orchestrator |
2026-02-27 01:00:41.330535 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-27 01:00:41.330539 | orchestrator | Friday 27 February 2026 00:55:43 +0000 (0:00:01.317) 0:07:08.518 *******
2026-02-27 01:00:41.330543 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.330547 | orchestrator |
2026-02-27 01:00:41.330551 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-27 01:00:41.330554 | orchestrator | Friday 27 February 2026 00:55:43 +0000 (0:00:00.329) 0:07:08.848 *******
2026-02-27 01:00:41.330558 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.330562 | orchestrator |
2026-02-27 01:00:41.330565 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-27 01:00:41.330569 | orchestrator | Friday 27 February 2026 00:55:43 +0000 (0:00:00.156) 0:07:09.004 *******
2026-02-27 01:00:41.330573 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-27 01:00:41.330577 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-27 01:00:41.330585 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-27 01:00:41.330588 | orchestrator |
2026-02-27 01:00:41.330592 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-27 01:00:41.330600 | orchestrator | Friday 27 February 2026 00:55:51 +0000 (0:00:07.609) 0:07:16.614 *******
2026-02-27 01:00:41.330604 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-27 01:00:41.330608 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-27 01:00:41.330612 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-27 01:00:41.330615 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-27 01:00:41.330619 | orchestrator |
2026-02-27 01:00:41.330623 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-27 01:00:41.330627 | orchestrator | Friday 27 February 2026 00:55:56 +0000 (0:00:05.199) 0:07:21.814 *******
2026-02-27 01:00:41.330630 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.330634 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.330638 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.330642 | orchestrator |
2026-02-27 01:00:41.330645 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-27 01:00:41.330649 | orchestrator | Friday 27 February 2026 00:55:57 +0000 (0:00:00.698) 0:07:22.513 *******
2026-02-27 01:00:41.330653 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.330656 | orchestrator |
2026-02-27 01:00:41.330660 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-27 01:00:41.330664 | orchestrator | Friday 27 February 2026 00:55:57 +0000 (0:00:00.787) 0:07:23.300 *******
2026-02-27 01:00:41.330667 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.330671 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.330675 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.330679 | orchestrator |
2026-02-27 01:00:41.330682 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-27 01:00:41.330686 | orchestrator | Friday 27 February 2026 00:55:58 +0000 (0:00:00.361) 0:07:23.662 *******
2026-02-27 01:00:41.330690 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.330693 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.330697 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.330701 | orchestrator |
2026-02-27 01:00:41.330704 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-27 01:00:41.330708 | orchestrator | Friday 27 February 2026 00:55:59 +0000 (0:00:01.349) 0:07:25.011 *******
2026-02-27 01:00:41.330712 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-27 01:00:41.330715 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-27 01:00:41.330719 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-27 01:00:41.330723 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.330727 | orchestrator |
2026-02-27 01:00:41.330730 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-27 01:00:41.330734 | orchestrator | Friday 27 February 2026 00:56:00 +0000 (0:00:00.968) 0:07:25.979 *******
2026-02-27 01:00:41.330738 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.330742 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.330745 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.330749 | orchestrator |
2026-02-27 01:00:41.330753 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-27 01:00:41.330756 | orchestrator |
2026-02-27 01:00:41.330760 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-27 01:00:41.330764 | orchestrator | Friday 27 February 2026 00:56:01 +0000 (0:00:00.873) 0:07:26.853 *******
2026-02-27 01:00:41.330771 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.330777 | orchestrator |
2026-02-27 01:00:41.330780 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-27 01:00:41.330784 | orchestrator | Friday 27 February 2026 00:56:02 +0000 (0:00:00.639) 0:07:27.493 *******
2026-02-27 01:00:41.330792 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.330796 | orchestrator |
2026-02-27 01:00:41.330799 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-27 01:00:41.330803 | orchestrator | Friday 27 February 2026 00:56:02 +0000 (0:00:00.798) 0:07:28.291 *******
2026-02-27 01:00:41.330807 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.330810 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.330814 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.330818 | orchestrator |
2026-02-27 01:00:41.330821 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-27 01:00:41.330825 | orchestrator | Friday 27 February 2026 00:56:03 +0000 (0:00:00.347) 0:07:28.639 *******
2026-02-27 01:00:41.330829 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.330833 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.330836 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.330840 | orchestrator |
2026-02-27 01:00:41.330844 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-27 01:00:41.330847 | orchestrator | Friday 27 February 2026 00:56:04 +0000 (0:00:00.765) 0:07:29.405 *******
2026-02-27 01:00:41.330851 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.330855 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.330858 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.330862 | orchestrator |
2026-02-27 01:00:41.330866 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-27 01:00:41.330870 | orchestrator | Friday 27 February 2026 00:56:04 +0000 (0:00:00.808) 0:07:30.213 *******
2026-02-27 01:00:41.330873 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.330877 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.330881 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.330884 | orchestrator |
2026-02-27 01:00:41.330888 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-27 01:00:41.330895 | orchestrator | Friday 27 February 2026 00:56:05 +0000 (0:00:01.080) 0:07:31.293 *******
2026-02-27 01:00:41.330899 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.330902 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.330906 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.330910 | orchestrator |
2026-02-27 01:00:41.330913 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-27 01:00:41.330917 | orchestrator | Friday 27 February 2026 00:56:06 +0000 (0:00:00.330) 0:07:31.624 *******
2026-02-27 01:00:41.330921 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.330925 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.330928 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.330932 | orchestrator |
2026-02-27 01:00:41.330936 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-27 01:00:41.330939 | orchestrator | Friday 27 February 2026 00:56:06 +0000 (0:00:00.335) 0:07:31.959 *******
2026-02-27 01:00:41.330943 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.330947 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.330951 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.330954 | orchestrator |
2026-02-27 01:00:41.330958 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-27 01:00:41.330962 | orchestrator | Friday 27 February 2026 00:56:06 +0000 (0:00:00.318) 0:07:32.278 *******
2026-02-27 01:00:41.330965 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.330969 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.330973 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.330976 | orchestrator |
2026-02-27 01:00:41.330980 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-27 01:00:41.330984 | orchestrator | Friday 27 February 2026 00:56:07 +0000 (0:00:01.072) 0:07:33.350 *******
2026-02-27 01:00:41.330988 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.330996 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.330999 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.331003 | orchestrator |
2026-02-27 01:00:41.331007 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-27 01:00:41.331010 | orchestrator | Friday 27 February 2026 00:56:08 +0000 (0:00:00.753) 0:07:34.104 *******
2026-02-27 01:00:41.331014 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.331018 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.331022 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.331025 | orchestrator |
2026-02-27 01:00:41.331029 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-27 01:00:41.331033 | orchestrator | Friday 27 February 2026 00:56:09 +0000 (0:00:00.422) 0:07:34.527 *******
2026-02-27 01:00:41.331036 | orchestrator | skipping:
[testbed-node-3] 2026-02-27 01:00:41.331040 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.331044 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331047 | orchestrator | 2026-02-27 01:00:41.331051 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-27 01:00:41.331055 | orchestrator | Friday 27 February 2026 00:56:09 +0000 (0:00:00.405) 0:07:34.932 ******* 2026-02-27 01:00:41.331058 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331062 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331066 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331070 | orchestrator | 2026-02-27 01:00:41.331073 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-27 01:00:41.331077 | orchestrator | Friday 27 February 2026 00:56:10 +0000 (0:00:00.589) 0:07:35.521 ******* 2026-02-27 01:00:41.331081 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331084 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331088 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331092 | orchestrator | 2026-02-27 01:00:41.331096 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-27 01:00:41.331099 | orchestrator | Friday 27 February 2026 00:56:10 +0000 (0:00:00.404) 0:07:35.926 ******* 2026-02-27 01:00:41.331103 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331107 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331114 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331118 | orchestrator | 2026-02-27 01:00:41.331121 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-27 01:00:41.331125 | orchestrator | Friday 27 February 2026 00:56:11 +0000 (0:00:00.488) 0:07:36.415 ******* 2026-02-27 01:00:41.331129 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.331133 | 
orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.331136 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331140 | orchestrator | 2026-02-27 01:00:41.331144 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-27 01:00:41.331148 | orchestrator | Friday 27 February 2026 00:56:11 +0000 (0:00:00.432) 0:07:36.848 ******* 2026-02-27 01:00:41.331151 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.331155 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.331159 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331178 | orchestrator | 2026-02-27 01:00:41.331184 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-27 01:00:41.331191 | orchestrator | Friday 27 February 2026 00:56:12 +0000 (0:00:00.581) 0:07:37.430 ******* 2026-02-27 01:00:41.331196 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.331202 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.331208 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331214 | orchestrator | 2026-02-27 01:00:41.331220 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-27 01:00:41.331225 | orchestrator | Friday 27 February 2026 00:56:12 +0000 (0:00:00.316) 0:07:37.746 ******* 2026-02-27 01:00:41.331231 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331236 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331242 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331256 | orchestrator | 2026-02-27 01:00:41.331262 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-27 01:00:41.331268 | orchestrator | Friday 27 February 2026 00:56:12 +0000 (0:00:00.360) 0:07:38.107 ******* 2026-02-27 01:00:41.331274 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331280 | orchestrator | ok: 
[testbed-node-4] 2026-02-27 01:00:41.331285 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331291 | orchestrator | 2026-02-27 01:00:41.331297 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-27 01:00:41.331302 | orchestrator | Friday 27 February 2026 00:56:13 +0000 (0:00:00.822) 0:07:38.930 ******* 2026-02-27 01:00:41.331311 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331317 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331323 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331328 | orchestrator | 2026-02-27 01:00:41.331333 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-27 01:00:41.331339 | orchestrator | Friday 27 February 2026 00:56:13 +0000 (0:00:00.371) 0:07:39.302 ******* 2026-02-27 01:00:41.331344 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-27 01:00:41.331350 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-27 01:00:41.331355 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-27 01:00:41.331361 | orchestrator | 2026-02-27 01:00:41.331367 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-27 01:00:41.331374 | orchestrator | Friday 27 February 2026 00:56:14 +0000 (0:00:00.612) 0:07:39.914 ******* 2026-02-27 01:00:41.331379 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.331385 | orchestrator | 2026-02-27 01:00:41.331390 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-27 01:00:41.331396 | orchestrator | Friday 27 February 2026 00:56:15 +0000 (0:00:00.555) 0:07:40.469 ******* 2026-02-27 01:00:41.331401 | orchestrator | skipping: 
[testbed-node-3] 2026-02-27 01:00:41.331407 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.331413 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331419 | orchestrator | 2026-02-27 01:00:41.331425 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-27 01:00:41.331431 | orchestrator | Friday 27 February 2026 00:56:15 +0000 (0:00:00.617) 0:07:41.087 ******* 2026-02-27 01:00:41.331436 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.331442 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.331447 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331453 | orchestrator | 2026-02-27 01:00:41.331459 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-27 01:00:41.331465 | orchestrator | Friday 27 February 2026 00:56:16 +0000 (0:00:00.315) 0:07:41.402 ******* 2026-02-27 01:00:41.331471 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331476 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331482 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331488 | orchestrator | 2026-02-27 01:00:41.331494 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-27 01:00:41.331500 | orchestrator | Friday 27 February 2026 00:56:16 +0000 (0:00:00.747) 0:07:42.150 ******* 2026-02-27 01:00:41.331506 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331512 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331517 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331522 | orchestrator | 2026-02-27 01:00:41.331528 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-27 01:00:41.331534 | orchestrator | Friday 27 February 2026 00:56:17 +0000 (0:00:00.369) 0:07:42.519 ******* 2026-02-27 01:00:41.331539 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-27 01:00:41.331545 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-27 01:00:41.331556 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-27 01:00:41.331562 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-27 01:00:41.331567 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-27 01:00:41.331577 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-27 01:00:41.331583 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-27 01:00:41.331589 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-27 01:00:41.331596 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-27 01:00:41.331602 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-27 01:00:41.331608 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-27 01:00:41.331614 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-27 01:00:41.331620 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-27 01:00:41.331626 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-27 01:00:41.331632 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-27 01:00:41.331638 | orchestrator | 2026-02-27 01:00:41.331645 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-02-27 01:00:41.331649 | orchestrator | Friday 27 February 2026 00:56:21 +0000 (0:00:04.175) 0:07:46.695 ******* 2026-02-27 01:00:41.331652 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.331656 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.331660 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331664 | orchestrator | 2026-02-27 01:00:41.331667 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-27 01:00:41.331671 | orchestrator | Friday 27 February 2026 00:56:21 +0000 (0:00:00.363) 0:07:47.059 ******* 2026-02-27 01:00:41.331675 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.331678 | orchestrator | 2026-02-27 01:00:41.331682 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-27 01:00:41.331689 | orchestrator | Friday 27 February 2026 00:56:22 +0000 (0:00:00.538) 0:07:47.597 ******* 2026-02-27 01:00:41.331693 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-27 01:00:41.331697 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-27 01:00:41.331701 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-27 01:00:41.331704 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-27 01:00:41.331708 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-27 01:00:41.331712 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-27 01:00:41.331716 | orchestrator | 2026-02-27 01:00:41.331720 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-27 01:00:41.331723 | orchestrator | Friday 27 February 2026 00:56:23 +0000 (0:00:01.351) 0:07:48.949 ******* 2026-02-27 01:00:41.331727 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.331731 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-27 01:00:41.331735 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-27 01:00:41.331738 | orchestrator | 2026-02-27 01:00:41.331742 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-27 01:00:41.331746 | orchestrator | Friday 27 February 2026 00:56:25 +0000 (0:00:02.282) 0:07:51.231 ******* 2026-02-27 01:00:41.331753 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-27 01:00:41.331757 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-27 01:00:41.331761 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.331765 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-27 01:00:41.331769 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-27 01:00:41.331772 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.331776 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-27 01:00:41.331780 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-27 01:00:41.331784 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.331787 | orchestrator | 2026-02-27 01:00:41.331791 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-27 01:00:41.331795 | orchestrator | Friday 27 February 2026 00:56:27 +0000 (0:00:01.314) 0:07:52.546 ******* 2026-02-27 01:00:41.331798 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-27 01:00:41.331802 | orchestrator | 2026-02-27 01:00:41.331806 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-27 01:00:41.331809 | orchestrator | Friday 27 February 2026 00:56:29 +0000 (0:00:02.218) 0:07:54.764 ******* 2026-02-27 01:00:41.331813 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.331817 | orchestrator | 2026-02-27 01:00:41.331821 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-27 01:00:41.331824 | orchestrator | Friday 27 February 2026 00:56:30 +0000 (0:00:00.907) 0:07:55.672 ******* 2026-02-27 01:00:41.331828 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3', 'data_vg': 'ceph-aa250c28-8715-5ad9-8f6a-4b8a4568e8d3'}) 2026-02-27 01:00:41.331834 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a', 'data_vg': 'ceph-c5e6c545-43c0-5a5e-9b6e-24e5d5043e2a'}) 2026-02-27 01:00:41.331841 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5630d52f-55a8-52f3-8c7d-90d730eab2c2', 'data_vg': 'ceph-5630d52f-55a8-52f3-8c7d-90d730eab2c2'}) 2026-02-27 01:00:41.331845 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-91c1f24e-fd77-555b-b1fb-5152ae0ce974', 'data_vg': 'ceph-91c1f24e-fd77-555b-b1fb-5152ae0ce974'}) 2026-02-27 01:00:41.331849 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-15e091ae-77f4-5dd5-92b2-2aa74778b61d', 'data_vg': 'ceph-15e091ae-77f4-5dd5-92b2-2aa74778b61d'}) 2026-02-27 01:00:41.331853 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-e90026b5-6780-5a31-9cea-c7916e7559fe', 'data_vg': 'ceph-e90026b5-6780-5a31-9cea-c7916e7559fe'}) 2026-02-27 01:00:41.331856 | orchestrator | 2026-02-27 01:00:41.331860 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-27 01:00:41.331864 | orchestrator | Friday 27 February 2026 00:57:11 +0000 (0:00:41.348) 0:08:37.021 ******* 2026-02-27 01:00:41.331868 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.331871 | orchestrator | skipping: [testbed-node-4] 2026-02-27 
01:00:41.331875 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.331879 | orchestrator | 2026-02-27 01:00:41.331882 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-27 01:00:41.331886 | orchestrator | Friday 27 February 2026 00:57:12 +0000 (0:00:00.479) 0:08:37.501 ******* 2026-02-27 01:00:41.331890 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.331894 | orchestrator | 2026-02-27 01:00:41.331897 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-27 01:00:41.331901 | orchestrator | Friday 27 February 2026 00:57:12 +0000 (0:00:00.810) 0:08:38.311 ******* 2026-02-27 01:00:41.331905 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331909 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331914 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331924 | orchestrator | 2026-02-27 01:00:41.331930 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-27 01:00:41.331935 | orchestrator | Friday 27 February 2026 00:57:13 +0000 (0:00:00.708) 0:08:39.019 ******* 2026-02-27 01:00:41.331941 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.331950 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.331956 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.331961 | orchestrator | 2026-02-27 01:00:41.331968 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-27 01:00:41.331973 | orchestrator | Friday 27 February 2026 00:57:16 +0000 (0:00:02.753) 0:08:41.773 ******* 2026-02-27 01:00:41.331979 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.331984 | orchestrator | 2026-02-27 01:00:41.331990 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-27 01:00:41.331996 | orchestrator | Friday 27 February 2026 00:57:17 +0000 (0:00:00.817) 0:08:42.591 ******* 2026-02-27 01:00:41.332002 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.332008 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.332014 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.332020 | orchestrator | 2026-02-27 01:00:41.332026 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-27 01:00:41.332031 | orchestrator | Friday 27 February 2026 00:57:18 +0000 (0:00:01.309) 0:08:43.901 ******* 2026-02-27 01:00:41.332037 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.332044 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.332048 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.332053 | orchestrator | 2026-02-27 01:00:41.332059 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-27 01:00:41.332065 | orchestrator | Friday 27 February 2026 00:57:19 +0000 (0:00:01.243) 0:08:45.145 ******* 2026-02-27 01:00:41.332071 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.332077 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.332083 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.332089 | orchestrator | 2026-02-27 01:00:41.332095 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-27 01:00:41.332102 | orchestrator | Friday 27 February 2026 00:57:21 +0000 (0:00:02.150) 0:08:47.295 ******* 2026-02-27 01:00:41.332108 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.332114 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.332120 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.332127 | orchestrator | 2026-02-27 01:00:41.332133 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-27 01:00:41.332140 | orchestrator | Friday 27 February 2026 00:57:22 +0000 (0:00:00.716) 0:08:48.012 ******* 2026-02-27 01:00:41.332144 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.332147 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.332151 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.332155 | orchestrator | 2026-02-27 01:00:41.332159 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/<cluster>-<osd-id> is present] ********* 2026-02-27 01:00:41.332199 | orchestrator | Friday 27 February 2026 00:57:23 +0000 (0:00:00.466) 0:08:48.478 ******* 2026-02-27 01:00:41.332207 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-27 01:00:41.332214 | orchestrator | ok: [testbed-node-4] => (item=3) 2026-02-27 01:00:41.332220 | orchestrator | ok: [testbed-node-5] => (item=5) 2026-02-27 01:00:41.332225 | orchestrator | ok: [testbed-node-3] => (item=4) 2026-02-27 01:00:41.332229 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-27 01:00:41.332233 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-27 01:00:41.332236 | orchestrator | 2026-02-27 01:00:41.332240 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-27 01:00:41.332244 | orchestrator | Friday 27 February 2026 00:57:24 +0000 (0:00:01.054) 0:08:49.532 ******* 2026-02-27 01:00:41.332248 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-27 01:00:41.332257 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-27 01:00:41.332261 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-27 01:00:41.332264 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-27 01:00:41.332272 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-27 01:00:41.332276 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-27 01:00:41.332280 | orchestrator | 2026-02-27 01:00:41.332284 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-27 01:00:41.332287 | orchestrator | Friday 27 February 2026 00:57:26 +0000 (0:00:02.352) 0:08:51.885 ******* 2026-02-27 01:00:41.332291 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-27 01:00:41.332295 | orchestrator | changed: [testbed-node-5] => (item=5) 2026-02-27 01:00:41.332298 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-27 01:00:41.332302 | orchestrator | changed: [testbed-node-3] => (item=4) 2026-02-27 01:00:41.332306 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-27 01:00:41.332310 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-27 01:00:41.332313 | orchestrator | 2026-02-27 01:00:41.332317 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-27 01:00:41.332321 | orchestrator | Friday 27 February 2026 00:57:30 +0000 (0:00:04.199) 0:08:56.084 ******* 2026-02-27 01:00:41.332324 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.332328 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.332332 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-27 01:00:41.332336 | orchestrator | 2026-02-27 01:00:41.332339 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-27 01:00:41.332343 | orchestrator | Friday 27 February 2026 00:57:33 +0000 (0:00:03.276) 0:08:59.361 ******* 2026-02-27 01:00:41.332347 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.332350 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.332354 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-27 01:00:41.332358 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-27 01:00:41.332362 | orchestrator | 2026-02-27 01:00:41.332365 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-27 01:00:41.332369 | orchestrator | Friday 27 February 2026 00:57:46 +0000 (0:00:12.589) 0:09:11.951 ******* 2026-02-27 01:00:41.332373 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.332376 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.332380 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.332384 | orchestrator | 2026-02-27 01:00:41.332391 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-27 01:00:41.332394 | orchestrator | Friday 27 February 2026 00:57:47 +0000 (0:00:01.129) 0:09:13.080 ******* 2026-02-27 01:00:41.332398 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.332402 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.332406 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.332409 | orchestrator | 2026-02-27 01:00:41.332413 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-27 01:00:41.332417 | orchestrator | Friday 27 February 2026 00:57:48 +0000 (0:00:00.349) 0:09:13.430 ******* 2026-02-27 01:00:41.332421 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.332424 | orchestrator | 2026-02-27 01:00:41.332428 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-27 01:00:41.332432 | orchestrator | Friday 27 February 2026 00:57:48 +0000 (0:00:00.842) 0:09:14.273 ******* 2026-02-27 01:00:41.332435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-27 01:00:41.332439 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  
2026-02-27 01:00:41.332443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-02-27 01:00:41.332446 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332454 | orchestrator |
2026-02-27 01:00:41.332457 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-27 01:00:41.332461 | orchestrator | Friday 27 February 2026 00:57:49 +0000 (0:00:00.455) 0:09:14.728 *******
2026-02-27 01:00:41.332465 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332469 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.332472 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.332476 | orchestrator |
2026-02-27 01:00:41.332480 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-27 01:00:41.332483 | orchestrator | Friday 27 February 2026 00:57:49 +0000 (0:00:00.328) 0:09:15.057 *******
2026-02-27 01:00:41.332487 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332491 | orchestrator |
2026-02-27 01:00:41.332494 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-27 01:00:41.332498 | orchestrator | Friday 27 February 2026 00:57:49 +0000 (0:00:00.242) 0:09:15.299 *******
2026-02-27 01:00:41.332502 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332505 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.332509 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.332513 | orchestrator |
2026-02-27 01:00:41.332516 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-27 01:00:41.332520 | orchestrator | Friday 27 February 2026 00:57:50 +0000 (0:00:00.240) 0:09:15.660 *******
2026-02-27 01:00:41.332524 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332528 | orchestrator |
2026-02-27 01:00:41.332531 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-27 01:00:41.332535 | orchestrator | Friday 27 February 2026 00:57:50 +0000 (0:00:00.240) 0:09:15.901 *******
2026-02-27 01:00:41.332539 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332542 | orchestrator |
2026-02-27 01:00:41.332546 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-27 01:00:41.332550 | orchestrator | Friday 27 February 2026 00:57:50 +0000 (0:00:00.294) 0:09:16.196 *******
2026-02-27 01:00:41.332553 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332557 | orchestrator |
2026-02-27 01:00:41.332561 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-27 01:00:41.332565 | orchestrator | Friday 27 February 2026 00:57:50 +0000 (0:00:00.170) 0:09:16.366 *******
2026-02-27 01:00:41.332568 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332572 | orchestrator |
2026-02-27 01:00:41.332578 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-27 01:00:41.332582 | orchestrator | Friday 27 February 2026 00:57:51 +0000 (0:00:00.993) 0:09:17.360 *******
2026-02-27 01:00:41.332586 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332590 | orchestrator |
2026-02-27 01:00:41.332593 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-27 01:00:41.332597 | orchestrator | Friday 27 February 2026 00:57:52 +0000 (0:00:00.265) 0:09:17.626 *******
2026-02-27 01:00:41.332601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  
2026-02-27 01:00:41.332604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  
2026-02-27 01:00:41.332608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  
2026-02-27 01:00:41.332612 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332616 | orchestrator |
2026-02-27 01:00:41.332619 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-27 01:00:41.332623 | orchestrator | Friday 27 February 2026 00:57:52 +0000 (0:00:00.436) 0:09:18.063 *******
2026-02-27 01:00:41.332627 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332630 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.332634 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.332638 | orchestrator |
2026-02-27 01:00:41.332642 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-27 01:00:41.332645 | orchestrator | Friday 27 February 2026 00:57:53 +0000 (0:00:00.364) 0:09:18.427 *******
2026-02-27 01:00:41.332653 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332657 | orchestrator |
2026-02-27 01:00:41.332661 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-27 01:00:41.332665 | orchestrator | Friday 27 February 2026 00:57:53 +0000 (0:00:00.257) 0:09:18.684 *******
2026-02-27 01:00:41.332668 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332672 | orchestrator |
2026-02-27 01:00:41.332676 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-27 01:00:41.332679 | orchestrator |
2026-02-27 01:00:41.332683 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-27 01:00:41.332687 | orchestrator | Friday 27 February 2026 00:57:54 +0000 (0:00:00.983) 0:09:19.667 *******
2026-02-27 01:00:41.332694 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.332700 | orchestrator |
2026-02-27 01:00:41.332703 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-27 01:00:41.332707 | orchestrator | Friday 27 February 2026 00:57:55 +0000 (0:00:01.467) 0:09:21.135 *******
2026-02-27 01:00:41.332711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.332715 | orchestrator |
2026-02-27 01:00:41.332719 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-27 01:00:41.332723 | orchestrator | Friday 27 February 2026 00:57:57 +0000 (0:00:01.399) 0:09:22.535 *******
2026-02-27 01:00:41.332726 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332730 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.332734 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.332737 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.332741 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.332745 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.332749 | orchestrator |
2026-02-27 01:00:41.332752 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-27 01:00:41.332756 | orchestrator | Friday 27 February 2026 00:57:58 +0000 (0:00:01.109) 0:09:23.645 *******
2026-02-27 01:00:41.332760 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.332763 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.332767 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.332771 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.332774 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.332778 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.332782 | orchestrator |
2026-02-27 01:00:41.332786 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-27 01:00:41.332789 | orchestrator | Friday 27 February 2026 00:57:58 +0000 (0:00:00.734) 0:09:24.379 *******
2026-02-27 01:00:41.332793 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.332797 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.332800 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.332804 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.332808 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.332811 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.332815 | orchestrator |
2026-02-27 01:00:41.332819 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-27 01:00:41.332823 | orchestrator | Friday 27 February 2026 00:58:00 +0000 (0:00:01.066) 0:09:25.445 *******
2026-02-27 01:00:41.332826 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.332830 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.332834 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.332837 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.332841 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.332845 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.332849 | orchestrator |
2026-02-27 01:00:41.332852 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-27 01:00:41.332860 | orchestrator | Friday 27 February 2026 00:58:00 +0000 (0:00:00.733) 0:09:26.178 *******
2026-02-27 01:00:41.332863 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332867 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.332871 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.332874 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.332878 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.332882 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.332885 | orchestrator |
2026-02-27 01:00:41.332889 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-27 01:00:41.332893 | orchestrator | Friday 27 February 2026 00:58:02 +0000 (0:00:01.397) 0:09:27.576 *******
2026-02-27 01:00:41.332897 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332900 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.332907 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.332910 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.332914 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.332918 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.332922 | orchestrator |
2026-02-27 01:00:41.332925 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-27 01:00:41.332929 | orchestrator | Friday 27 February 2026 00:58:02 +0000 (0:00:00.620) 0:09:28.197 *******
2026-02-27 01:00:41.332933 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.332936 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.332940 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.332944 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.332948 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.332951 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.332955 | orchestrator |
2026-02-27 01:00:41.332959 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-27 01:00:41.332962 | orchestrator | Friday 27 February 2026 00:58:03 +0000 (0:00:00.733) 0:09:28.930 *******
2026-02-27 01:00:41.332966 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.332970 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.332973 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.332977 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.332981 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.332984 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.332988 | orchestrator |
2026-02-27 01:00:41.332992 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-27 01:00:41.332996 | orchestrator | Friday 27 February 2026 00:58:04 +0000 (0:00:01.011) 0:09:29.942 *******
2026-02-27 01:00:41.332999 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333003 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333007 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333010 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333014 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.333018 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.333021 | orchestrator |
2026-02-27 01:00:41.333025 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-27 01:00:41.333029 | orchestrator | Friday 27 February 2026 00:58:05 +0000 (0:00:00.479) 0:09:31.132 *******
2026-02-27 01:00:41.333033 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.333036 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.333040 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.333044 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.333050 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.333054 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.333058 | orchestrator |
2026-02-27 01:00:41.333062 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-27 01:00:41.333065 | orchestrator | Friday 27 February 2026 00:58:06 +0000 (0:00:00.479) 0:09:31.612 *******
2026-02-27 01:00:41.333069 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.333073 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.333080 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.333084 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333088 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.333091 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.333095 | orchestrator |
2026-02-27 01:00:41.333099 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-27 01:00:41.333102 | orchestrator | Friday 27 February 2026 00:58:06 +0000 (0:00:00.697) 0:09:32.309 *******
2026-02-27 01:00:41.333106 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333110 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333114 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333117 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.333121 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.333125 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.333128 | orchestrator |
2026-02-27 01:00:41.333132 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-27 01:00:41.333136 | orchestrator | Friday 27 February 2026 00:58:07 +0000 (0:00:00.700) 0:09:33.010 *******
2026-02-27 01:00:41.333140 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333143 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333147 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333151 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.333154 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.333158 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.333161 | orchestrator |
2026-02-27 01:00:41.333182 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-27 01:00:41.333189 | orchestrator | Friday 27 February 2026 00:58:08 +0000 (0:00:00.732) 0:09:33.743 *******
2026-02-27 01:00:41.333194 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333200 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333205 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333212 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.333218 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.333224 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.333229 | orchestrator |
2026-02-27 01:00:41.333235 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-27 01:00:41.333241 | orchestrator | Friday 27 February 2026 00:58:08 +0000 (0:00:00.535) 0:09:34.279 *******
2026-02-27 01:00:41.333248 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.333252 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.333256 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.333259 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.333263 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.333267 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.333270 | orchestrator |
2026-02-27 01:00:41.333274 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-27 01:00:41.333278 | orchestrator | Friday 27 February 2026 00:58:09 +0000 (0:00:00.687) 0:09:34.967 *******
2026-02-27 01:00:41.333282 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.333287 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.333293 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.333298 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:00:41.333304 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:00:41.333310 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:00:41.333316 | orchestrator |
2026-02-27 01:00:41.333321 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-27 01:00:41.333328 | orchestrator | Friday 27 February 2026 00:58:10 +0000 (0:00:00.545) 0:09:35.512 *******
2026-02-27 01:00:41.333338 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.333344 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.333349 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.333356 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333362 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.333367 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.333382 | orchestrator |
2026-02-27 01:00:41.333386 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-27 01:00:41.333389 | orchestrator | Friday 27 February 2026 00:58:11 +0000 (0:00:00.897) 0:09:36.410 *******
2026-02-27 01:00:41.333393 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333397 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333400 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333404 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333408 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.333411 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.333415 | orchestrator |
2026-02-27 01:00:41.333419 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-27 01:00:41.333422 | orchestrator | Friday 27 February 2026 00:58:11 +0000 (0:00:00.670) 0:09:37.081 *******
2026-02-27 01:00:41.333426 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333430 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333433 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333437 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333441 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.333444 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.333448 | orchestrator |
2026-02-27 01:00:41.333452 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-27 01:00:41.333455 | orchestrator | Friday 27 February 2026 00:58:13 +0000 (0:00:01.385) 0:09:38.466 *******
2026-02-27 01:00:41.333459 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:00:41.333463 | orchestrator |
2026-02-27 01:00:41.333467 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-27 01:00:41.333470 | orchestrator | Friday 27 February 2026 00:58:17 +0000 (0:00:04.031) 0:09:42.497 *******
2026-02-27 01:00:41.333474 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:00:41.333478 | orchestrator |
2026-02-27 01:00:41.333481 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-27 01:00:41.333485 | orchestrator | Friday 27 February 2026 00:58:19 +0000 (0:00:01.983) 0:09:44.481 *******
2026-02-27 01:00:41.333489 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.333496 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.333500 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.333503 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333509 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.333515 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.333521 | orchestrator |
2026-02-27 01:00:41.333527 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-27 01:00:41.333533 | orchestrator | Friday 27 February 2026 00:58:20 +0000 (0:00:01.839) 0:09:46.320 *******
2026-02-27 01:00:41.333539 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.333546 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.333552 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.333557 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.333564 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.333569 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.333576 | orchestrator |
2026-02-27 01:00:41.333582 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-27 01:00:41.333588 | orchestrator | Friday 27 February 2026 00:58:22 +0000 (0:00:01.130) 0:09:47.450 *******
2026-02-27 01:00:41.333594 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.333601 | orchestrator |
2026-02-27 01:00:41.333606 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-27 01:00:41.333612 | orchestrator | Friday 27 February 2026 00:58:23 +0000 (0:00:01.392) 0:09:48.843 *******
2026-02-27 01:00:41.333619 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.333625 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.333632 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.333644 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.333649 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.333655 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.333660 | orchestrator |
2026-02-27 01:00:41.333666 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-27 01:00:41.333672 | orchestrator | Friday 27 February 2026 00:58:26 +0000 (0:00:02.583) 0:09:51.427 *******
2026-02-27 01:00:41.333678 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.333685 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.333691 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.333697 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.333703 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.333709 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.333715 | orchestrator |
2026-02-27 01:00:41.333722 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-27 01:00:41.333728 | orchestrator | Friday 27 February 2026 00:58:29 +0000 (0:00:03.575) 0:09:55.002 *******
2026-02-27 01:00:41.333734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:00:41.333741 | orchestrator |
2026-02-27 01:00:41.333747 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-27 01:00:41.333753 | orchestrator | Friday 27 February 2026 00:58:31 +0000 (0:00:01.459) 0:09:56.462 *******
2026-02-27 01:00:41.333760 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333766 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333772 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333779 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333785 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.333791 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.333798 | orchestrator |
2026-02-27 01:00:41.333804 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-27 01:00:41.333811 | orchestrator | Friday 27 February 2026 00:58:32 +0000 (0:00:01.302) 0:09:57.765 *******
2026-02-27 01:00:41.333821 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:00:41.333828 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:00:41.333834 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:00:41.333840 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:00:41.333847 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:00:41.333853 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:00:41.333859 | orchestrator |
2026-02-27 01:00:41.333866 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-27 01:00:41.333872 | orchestrator | Friday 27 February 2026 00:58:35 +0000 (0:00:02.980) 0:10:00.745 *******
2026-02-27 01:00:41.333879 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.333884 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.333890 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.333897 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:00:41.333903 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:00:41.333910 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:00:41.333916 | orchestrator |
2026-02-27 01:00:41.333923 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-27 01:00:41.333930 | orchestrator |
2026-02-27 01:00:41.333937 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-27 01:00:41.333943 | orchestrator | Friday 27 February 2026 00:58:36 +0000 (0:00:01.189) 0:10:01.935 *******
2026-02-27 01:00:41.333950 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.333957 | orchestrator |
2026-02-27 01:00:41.333964 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-27 01:00:41.333970 | orchestrator | Friday 27 February 2026 00:58:37 +0000 (0:00:00.537) 0:10:02.472 *******
2026-02-27 01:00:41.333977 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.333987 | orchestrator |
2026-02-27 01:00:41.333993 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-27 01:00:41.333999 | orchestrator | Friday 27 February 2026 00:58:37 +0000 (0:00:00.890) 0:10:03.362 *******
2026-02-27 01:00:41.334004 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334011 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334127 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334133 | orchestrator |
2026-02-27 01:00:41.334145 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-27 01:00:41.334151 | orchestrator | Friday 27 February 2026 00:58:38 +0000 (0:00:00.346) 0:10:03.708 *******
2026-02-27 01:00:41.334158 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334202 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334209 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334215 | orchestrator |
2026-02-27 01:00:41.334220 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-27 01:00:41.334224 | orchestrator | Friday 27 February 2026 00:58:39 +0000 (0:00:01.094) 0:10:04.803 *******
2026-02-27 01:00:41.334228 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334232 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334235 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334239 | orchestrator |
2026-02-27 01:00:41.334243 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-27 01:00:41.334246 | orchestrator | Friday 27 February 2026 00:58:40 +0000 (0:00:01.110) 0:10:05.914 *******
2026-02-27 01:00:41.334250 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334254 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334257 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334261 | orchestrator |
2026-02-27 01:00:41.334265 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-27 01:00:41.334269 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:00.857) 0:10:06.771 *******
2026-02-27 01:00:41.334272 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334276 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334280 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334283 | orchestrator |
2026-02-27 01:00:41.334287 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-27 01:00:41.334291 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:00.365) 0:10:07.136 *******
2026-02-27 01:00:41.334295 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334298 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334302 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334306 | orchestrator |
2026-02-27 01:00:41.334309 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-27 01:00:41.334313 | orchestrator | Friday 27 February 2026 00:58:42 +0000 (0:00:00.332) 0:10:07.469 *******
2026-02-27 01:00:41.334317 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334320 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334324 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334328 | orchestrator |
2026-02-27 01:00:41.334332 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-27 01:00:41.334335 | orchestrator | Friday 27 February 2026 00:58:42 +0000 (0:00:00.628) 0:10:08.097 *******
2026-02-27 01:00:41.334339 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334343 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334346 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334350 | orchestrator |
2026-02-27 01:00:41.334354 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-27 01:00:41.334357 | orchestrator | Friday 27 February 2026 00:58:43 +0000 (0:00:00.905) 0:10:09.003 *******
2026-02-27 01:00:41.334361 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334365 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334368 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334372 | orchestrator |
2026-02-27 01:00:41.334376 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-27 01:00:41.334384 | orchestrator | Friday 27 February 2026 00:58:44 +0000 (0:00:00.797) 0:10:09.801 *******
2026-02-27 01:00:41.334388 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334392 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334396 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334399 | orchestrator |
2026-02-27 01:00:41.334403 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-27 01:00:41.334407 | orchestrator | Friday 27 February 2026 00:58:44 +0000 (0:00:00.364) 0:10:10.166 *******
2026-02-27 01:00:41.334415 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334419 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334422 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334426 | orchestrator |
2026-02-27 01:00:41.334430 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-27 01:00:41.334433 | orchestrator | Friday 27 February 2026 00:58:45 +0000 (0:00:00.632) 0:10:10.799 *******
2026-02-27 01:00:41.334437 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334441 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334444 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334448 | orchestrator |
2026-02-27 01:00:41.334452 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-27 01:00:41.334456 | orchestrator | Friday 27 February 2026 00:58:45 +0000 (0:00:00.403) 0:10:11.202 *******
2026-02-27 01:00:41.334459 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334463 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334467 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334470 | orchestrator |
2026-02-27 01:00:41.334474 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-27 01:00:41.334478 | orchestrator | Friday 27 February 2026 00:58:46 +0000 (0:00:00.393) 0:10:11.596 *******
2026-02-27 01:00:41.334482 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334485 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334489 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334493 | orchestrator |
2026-02-27 01:00:41.334496 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-27 01:00:41.334500 | orchestrator | Friday 27 February 2026 00:58:46 +0000 (0:00:00.380) 0:10:11.976 *******
2026-02-27 01:00:41.334504 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334508 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334511 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334515 | orchestrator |
2026-02-27 01:00:41.334519 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-27 01:00:41.334522 | orchestrator | Friday 27 February 2026 00:58:47 +0000 (0:00:00.718) 0:10:12.695 *******
2026-02-27 01:00:41.334527 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334533 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334539 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334545 | orchestrator |
2026-02-27 01:00:41.334551 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-27 01:00:41.334561 | orchestrator | Friday 27 February 2026 00:58:47 +0000 (0:00:00.374) 0:10:13.069 *******
2026-02-27 01:00:41.334567 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334573 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334579 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334585 | orchestrator |
2026-02-27 01:00:41.334591 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-27 01:00:41.334597 | orchestrator | Friday 27 February 2026 00:58:48 +0000 (0:00:00.403) 0:10:13.473 *******
2026-02-27 01:00:41.334603 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334609 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334615 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334621 | orchestrator |
2026-02-27 01:00:41.334627 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-27 01:00:41.334633 | orchestrator | Friday 27 February 2026 00:58:48 +0000 (0:00:00.363) 0:10:13.837 *******
2026-02-27 01:00:41.334645 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:00:41.334651 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:00:41.334658 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:00:41.334663 | orchestrator |
2026-02-27 01:00:41.334669 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-27 01:00:41.334675 | orchestrator | Friday 27 February 2026 00:58:49 +0000 (0:00:01.035) 0:10:14.872 *******
2026-02-27 01:00:41.334680 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:00:41.334686 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:00:41.334693 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-27 01:00:41.334699 | orchestrator |
2026-02-27 01:00:41.334705 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-27 01:00:41.334711 | orchestrator | Friday 27 February 2026 00:58:49 +0000 (0:00:00.471) 0:10:15.343 *******
2026-02-27 01:00:41.334717 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:00:41.334724 | orchestrator |
2026-02-27 01:00:41.334730 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-27 01:00:41.334736 | orchestrator | Friday 27 February 2026 00:58:52 +0000 (0:00:02.294) 0:10:17.638 *******
2026-02-27 01:00:41.334746 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  
2026-02-27 01:00:41.334754 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:00:41.334761 | orchestrator |
2026-02-27 01:00:41.334767 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-27 01:00:41.334773 | orchestrator | Friday 27 February 2026 00:58:53 +0000 (0:00:00.842) 0:10:18.481 *******
2026-02-27 01:00:41.334783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-27 01:00:41.334796 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-27 01:00:41.334803 | orchestrator |
2026-02-27 01:00:41.334810 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-27 01:00:41.334822 | orchestrator | Friday 27 February 2026 00:58:59 +0000 (0:00:06.862) 0:10:25.343 *******
2026-02-27 01:00:41.334829 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:00:41.334836 | orchestrator |
2026-02-27 01:00:41.334841 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-27 01:00:41.334847 | orchestrator | Friday 27 February 2026 00:59:03 +0000 (0:00:03.324) 0:10:28.668 *******
2026-02-27 01:00:41.334853 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:00:41.334859 | orchestrator |
2026-02-27 01:00:41.334865 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-27 01:00:41.334871 | orchestrator | Friday 27 February 2026 00:59:03 +0000 (0:00:00.560) 0:10:29.229 *******
2026-02-27 01:00:41.334878 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-27 01:00:41.334884 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-27 01:00:41.334891 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-27 01:00:41.334896 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-27 01:00:41.334903 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-27 01:00:41.334913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-27 01:00:41.334917 | orchestrator |
2026-02-27 01:00:41.334921 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-27 01:00:41.334927 | orchestrator | Friday 27 February 2026 00:59:04 +0000 (0:00:01.149) 0:10:30.378 *******
2026-02-27 01:00:41.334933 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-27 01:00:41.334939 | orchestrator | skipping: [testbed-node-3] => (item=None)  
2026-02-27 01:00:41.334946 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-27 01:00:41.334952 | orchestrator |
2026-02-27 01:00:41.334958 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-27 01:00:41.334964 | orchestrator | Friday 27 February 2026 00:59:07 +0000 (0:00:02.762) 0:10:33.141 *******
2026-02-27 01:00:41.334970 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-27 01:00:41.334981 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-02-27 01:00:41.334989 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.334993 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-27 01:00:41.334997 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-27 01:00:41.335001 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335005 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-27 01:00:41.335009 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-27 01:00:41.335012 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335016 | orchestrator | 2026-02-27 01:00:41.335020 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-27 01:00:41.335024 | orchestrator | Friday 27 February 2026 00:59:09 +0000 (0:00:01.543) 0:10:34.684 ******* 2026-02-27 01:00:41.335028 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.335031 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335035 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335039 | orchestrator | 2026-02-27 01:00:41.335042 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-27 01:00:41.335046 | orchestrator | Friday 27 February 2026 00:59:12 +0000 (0:00:02.929) 0:10:37.613 ******* 2026-02-27 01:00:41.335050 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335053 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.335057 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.335061 | orchestrator | 2026-02-27 01:00:41.335065 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-27 01:00:41.335068 | orchestrator | Friday 27 February 2026 00:59:12 +0000 (0:00:00.316) 0:10:37.929 ******* 2026-02-27 01:00:41.335072 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-27 01:00:41.335076 | orchestrator | 2026-02-27 01:00:41.335079 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-27 01:00:41.335083 | orchestrator | Friday 27 February 2026 00:59:13 +0000 (0:00:00.835) 0:10:38.765 ******* 2026-02-27 01:00:41.335087 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.335091 | orchestrator | 2026-02-27 01:00:41.335094 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-27 01:00:41.335098 | orchestrator | Friday 27 February 2026 00:59:13 +0000 (0:00:00.570) 0:10:39.336 ******* 2026-02-27 01:00:41.335102 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.335106 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335109 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335113 | orchestrator | 2026-02-27 01:00:41.335117 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-27 01:00:41.335120 | orchestrator | Friday 27 February 2026 00:59:15 +0000 (0:00:01.263) 0:10:40.599 ******* 2026-02-27 01:00:41.335124 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.335128 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335135 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335139 | orchestrator | 2026-02-27 01:00:41.335143 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-27 01:00:41.335146 | orchestrator | Friday 27 February 2026 00:59:16 +0000 (0:00:01.519) 0:10:42.118 ******* 2026-02-27 01:00:41.335150 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.335154 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335158 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335161 | orchestrator | 2026-02-27 
01:00:41.335204 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-27 01:00:41.335210 | orchestrator | Friday 27 February 2026 00:59:18 +0000 (0:00:02.029) 0:10:44.148 ******* 2026-02-27 01:00:41.335216 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.335226 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335230 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335234 | orchestrator | 2026-02-27 01:00:41.335238 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-27 01:00:41.335241 | orchestrator | Friday 27 February 2026 00:59:20 +0000 (0:00:02.033) 0:10:46.181 ******* 2026-02-27 01:00:41.335245 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335249 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335253 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335256 | orchestrator | 2026-02-27 01:00:41.335260 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-27 01:00:41.335264 | orchestrator | Friday 27 February 2026 00:59:22 +0000 (0:00:01.587) 0:10:47.769 ******* 2026-02-27 01:00:41.335268 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.335271 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335275 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335279 | orchestrator | 2026-02-27 01:00:41.335282 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-27 01:00:41.335286 | orchestrator | Friday 27 February 2026 00:59:23 +0000 (0:00:00.768) 0:10:48.537 ******* 2026-02-27 01:00:41.335290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.335294 | orchestrator | 2026-02-27 01:00:41.335297 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-27 01:00:41.335301 | orchestrator | Friday 27 February 2026 00:59:23 +0000 (0:00:00.827) 0:10:49.364 ******* 2026-02-27 01:00:41.335305 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335309 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335312 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335316 | orchestrator | 2026-02-27 01:00:41.335320 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-27 01:00:41.335323 | orchestrator | Friday 27 February 2026 00:59:24 +0000 (0:00:00.343) 0:10:49.708 ******* 2026-02-27 01:00:41.335327 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.335331 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.335334 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.335338 | orchestrator | 2026-02-27 01:00:41.335342 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-27 01:00:41.335346 | orchestrator | Friday 27 February 2026 00:59:25 +0000 (0:00:01.225) 0:10:50.933 ******* 2026-02-27 01:00:41.335350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-27 01:00:41.335354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-27 01:00:41.335358 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-27 01:00:41.335361 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335365 | orchestrator | 2026-02-27 01:00:41.335369 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-27 01:00:41.335373 | orchestrator | Friday 27 February 2026 00:59:26 +0000 (0:00:00.925) 0:10:51.859 ******* 2026-02-27 01:00:41.335377 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335381 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335389 | orchestrator | ok: [testbed-node-5] 2026-02-27 
01:00:41.335395 | orchestrator | 2026-02-27 01:00:41.335401 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-27 01:00:41.335407 | orchestrator | 2026-02-27 01:00:41.335413 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-27 01:00:41.335419 | orchestrator | Friday 27 February 2026 00:59:27 +0000 (0:00:00.897) 0:10:52.756 ******* 2026-02-27 01:00:41.335425 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.335431 | orchestrator | 2026-02-27 01:00:41.335438 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-27 01:00:41.335442 | orchestrator | Friday 27 February 2026 00:59:27 +0000 (0:00:00.526) 0:10:53.283 ******* 2026-02-27 01:00:41.335448 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.335454 | orchestrator | 2026-02-27 01:00:41.335460 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-27 01:00:41.335510 | orchestrator | Friday 27 February 2026 00:59:28 +0000 (0:00:00.794) 0:10:54.077 ******* 2026-02-27 01:00:41.335518 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335524 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.335530 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.335536 | orchestrator | 2026-02-27 01:00:41.335542 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-27 01:00:41.335548 | orchestrator | Friday 27 February 2026 00:59:29 +0000 (0:00:00.344) 0:10:54.422 ******* 2026-02-27 01:00:41.335554 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335561 | orchestrator | ok: [testbed-node-4] 2026-02-27 
01:00:41.335567 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335573 | orchestrator | 2026-02-27 01:00:41.335579 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-27 01:00:41.335586 | orchestrator | Friday 27 February 2026 00:59:29 +0000 (0:00:00.770) 0:10:55.192 ******* 2026-02-27 01:00:41.335592 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335598 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335605 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335611 | orchestrator | 2026-02-27 01:00:41.335618 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-27 01:00:41.335624 | orchestrator | Friday 27 February 2026 00:59:30 +0000 (0:00:01.048) 0:10:56.240 ******* 2026-02-27 01:00:41.335631 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335637 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335643 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335650 | orchestrator | 2026-02-27 01:00:41.335656 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-27 01:00:41.335662 | orchestrator | Friday 27 February 2026 00:59:31 +0000 (0:00:00.753) 0:10:56.994 ******* 2026-02-27 01:00:41.335669 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335675 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.335682 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.335688 | orchestrator | 2026-02-27 01:00:41.335699 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-27 01:00:41.335706 | orchestrator | Friday 27 February 2026 00:59:31 +0000 (0:00:00.321) 0:10:57.315 ******* 2026-02-27 01:00:41.335712 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335718 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.335725 | orchestrator | skipping: 
[testbed-node-5] 2026-02-27 01:00:41.335731 | orchestrator | 2026-02-27 01:00:41.335738 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-27 01:00:41.335744 | orchestrator | Friday 27 February 2026 00:59:32 +0000 (0:00:00.373) 0:10:57.689 ******* 2026-02-27 01:00:41.335750 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335758 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.335769 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.335776 | orchestrator | 2026-02-27 01:00:41.335783 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-27 01:00:41.335790 | orchestrator | Friday 27 February 2026 00:59:32 +0000 (0:00:00.666) 0:10:58.355 ******* 2026-02-27 01:00:41.335796 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335803 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335810 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335816 | orchestrator | 2026-02-27 01:00:41.335822 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-27 01:00:41.335828 | orchestrator | Friday 27 February 2026 00:59:33 +0000 (0:00:00.753) 0:10:59.109 ******* 2026-02-27 01:00:41.335833 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335840 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335845 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335852 | orchestrator | 2026-02-27 01:00:41.335858 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-27 01:00:41.335864 | orchestrator | Friday 27 February 2026 00:59:34 +0000 (0:00:00.706) 0:10:59.816 ******* 2026-02-27 01:00:41.335871 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335875 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.335879 | orchestrator | skipping: [testbed-node-5] 2026-02-27 
01:00:41.335883 | orchestrator | 2026-02-27 01:00:41.335886 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-27 01:00:41.335890 | orchestrator | Friday 27 February 2026 00:59:34 +0000 (0:00:00.386) 0:11:00.203 ******* 2026-02-27 01:00:41.335894 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.335897 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.335904 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.335909 | orchestrator | 2026-02-27 01:00:41.335915 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-27 01:00:41.335921 | orchestrator | Friday 27 February 2026 00:59:35 +0000 (0:00:00.623) 0:11:00.827 ******* 2026-02-27 01:00:41.335927 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335934 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335940 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335946 | orchestrator | 2026-02-27 01:00:41.335952 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-27 01:00:41.335959 | orchestrator | Friday 27 February 2026 00:59:35 +0000 (0:00:00.348) 0:11:01.176 ******* 2026-02-27 01:00:41.335965 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.335971 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.335978 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.335984 | orchestrator | 2026-02-27 01:00:41.335990 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-27 01:00:41.335994 | orchestrator | Friday 27 February 2026 00:59:36 +0000 (0:00:00.350) 0:11:01.526 ******* 2026-02-27 01:00:41.335997 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.336001 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.336005 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.336008 | orchestrator | 2026-02-27 
01:00:41.336012 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-27 01:00:41.336016 | orchestrator | Friday 27 February 2026 00:59:36 +0000 (0:00:00.355) 0:11:01.882 ******* 2026-02-27 01:00:41.336020 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336023 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.336027 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.336031 | orchestrator | 2026-02-27 01:00:41.336034 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-27 01:00:41.336038 | orchestrator | Friday 27 February 2026 00:59:37 +0000 (0:00:00.635) 0:11:02.518 ******* 2026-02-27 01:00:41.336042 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336046 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.336049 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.336053 | orchestrator | 2026-02-27 01:00:41.336057 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-27 01:00:41.336067 | orchestrator | Friday 27 February 2026 00:59:37 +0000 (0:00:00.354) 0:11:02.873 ******* 2026-02-27 01:00:41.336071 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336074 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.336078 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.336082 | orchestrator | 2026-02-27 01:00:41.336086 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-27 01:00:41.336089 | orchestrator | Friday 27 February 2026 00:59:37 +0000 (0:00:00.332) 0:11:03.206 ******* 2026-02-27 01:00:41.336093 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.336097 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.336100 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.336104 | orchestrator | 2026-02-27 01:00:41.336108 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-27 01:00:41.336112 | orchestrator | Friday 27 February 2026 00:59:38 +0000 (0:00:00.363) 0:11:03.570 ******* 2026-02-27 01:00:41.336115 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.336119 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.336123 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.336126 | orchestrator | 2026-02-27 01:00:41.336130 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-27 01:00:41.336134 | orchestrator | Friday 27 February 2026 00:59:39 +0000 (0:00:01.022) 0:11:04.593 ******* 2026-02-27 01:00:41.336138 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.336141 | orchestrator | 2026-02-27 01:00:41.336145 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-27 01:00:41.336153 | orchestrator | Friday 27 February 2026 00:59:39 +0000 (0:00:00.635) 0:11:05.229 ******* 2026-02-27 01:00:41.336157 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.336161 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-27 01:00:41.336181 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-27 01:00:41.336187 | orchestrator | 2026-02-27 01:00:41.336193 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-27 01:00:41.336198 | orchestrator | Friday 27 February 2026 00:59:42 +0000 (0:00:02.407) 0:11:07.637 ******* 2026-02-27 01:00:41.336204 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-27 01:00:41.336210 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-27 01:00:41.336216 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.336222 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-27 01:00:41.336228 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-27 01:00:41.336234 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.336239 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-27 01:00:41.336245 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-27 01:00:41.336251 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.336256 | orchestrator | 2026-02-27 01:00:41.336262 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-27 01:00:41.336268 | orchestrator | Friday 27 February 2026 00:59:44 +0000 (0:00:01.781) 0:11:09.418 ******* 2026-02-27 01:00:41.336274 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336280 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.336286 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.336291 | orchestrator | 2026-02-27 01:00:41.336297 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-27 01:00:41.336302 | orchestrator | Friday 27 February 2026 00:59:44 +0000 (0:00:00.387) 0:11:09.806 ******* 2026-02-27 01:00:41.336307 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.336313 | orchestrator | 2026-02-27 01:00:41.336319 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-27 01:00:41.336337 | orchestrator | Friday 27 February 2026 00:59:45 +0000 (0:00:00.763) 0:11:10.569 ******* 2026-02-27 01:00:41.336343 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-27 01:00:41.336350 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-27 01:00:41.336356 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-27 01:00:41.336362 | orchestrator | 2026-02-27 01:00:41.336368 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-27 01:00:41.336374 | orchestrator | Friday 27 February 2026 00:59:46 +0000 (0:00:01.613) 0:11:12.182 ******* 2026-02-27 01:00:41.336380 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.336386 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.336392 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-27 01:00:41.336399 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-27 01:00:41.336404 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.336411 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-27 01:00:41.336417 | orchestrator | 2026-02-27 01:00:41.336422 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-27 01:00:41.336429 | orchestrator | Friday 27 February 2026 00:59:51 +0000 (0:00:04.887) 0:11:17.069 ******* 2026-02-27 01:00:41.336435 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.336441 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-27 01:00:41.336447 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.336454 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:00:41.336460 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-27 01:00:41.336467 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-27 01:00:41.336473 | orchestrator | 2026-02-27 01:00:41.336479 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-27 01:00:41.336485 | orchestrator | Friday 27 February 2026 00:59:54 +0000 (0:00:02.496) 0:11:19.565 ******* 2026-02-27 01:00:41.336491 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-27 01:00:41.336497 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.336503 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-27 01:00:41.336509 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.336515 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-27 01:00:41.336521 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.336528 | orchestrator | 2026-02-27 01:00:41.336534 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-27 01:00:41.336545 | orchestrator | Friday 27 February 2026 00:59:55 +0000 (0:00:01.296) 0:11:20.862 ******* 2026-02-27 01:00:41.336552 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-27 01:00:41.336558 | orchestrator | 2026-02-27 01:00:41.336564 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-27 01:00:41.336571 | orchestrator | Friday 27 February 2026 00:59:55 +0000 (0:00:00.233) 0:11:21.095 ******* 2026-02-27 01:00:41.336577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-27 01:00:41.336589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336595 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336614 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336620 | orchestrator | 2026-02-27 01:00:41.336627 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-27 01:00:41.336633 | orchestrator | Friday 27 February 2026 00:59:56 +0000 (0:00:01.266) 0:11:22.362 ******* 2026-02-27 01:00:41.336640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336646 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336663 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336669 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-27 01:00:41.336675 | orchestrator | skipping: [testbed-node-3] 2026-02-27 
01:00:41.336682 | orchestrator | 2026-02-27 01:00:41.336688 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-27 01:00:41.336694 | orchestrator | Friday 27 February 2026 00:59:57 +0000 (0:00:00.655) 0:11:23.017 ******* 2026-02-27 01:00:41.336701 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-27 01:00:41.336707 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-27 01:00:41.336713 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-27 01:00:41.336719 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-27 01:00:41.336728 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-27 01:00:41.336734 | orchestrator | 2026-02-27 01:00:41.336741 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-27 01:00:41.336748 | orchestrator | Friday 27 February 2026 01:00:28 +0000 (0:00:30.503) 0:11:53.521 ******* 2026-02-27 01:00:41.336755 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336761 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.336767 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.336773 | orchestrator | 2026-02-27 01:00:41.336779 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-27 01:00:41.336785 | orchestrator | 
Friday 27 February 2026 01:00:28 +0000 (0:00:00.341) 0:11:53.863 ******* 2026-02-27 01:00:41.336789 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336793 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.336797 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.336808 | orchestrator | 2026-02-27 01:00:41.336812 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-27 01:00:41.336816 | orchestrator | Friday 27 February 2026 01:00:28 +0000 (0:00:00.340) 0:11:54.204 ******* 2026-02-27 01:00:41.336819 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.336823 | orchestrator | 2026-02-27 01:00:41.336827 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-27 01:00:41.336831 | orchestrator | Friday 27 February 2026 01:00:29 +0000 (0:00:00.903) 0:11:55.107 ******* 2026-02-27 01:00:41.336834 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.336838 | orchestrator | 2026-02-27 01:00:41.336845 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-27 01:00:41.336849 | orchestrator | Friday 27 February 2026 01:00:30 +0000 (0:00:00.581) 0:11:55.689 ******* 2026-02-27 01:00:41.336852 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.336856 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.336860 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.336864 | orchestrator | 2026-02-27 01:00:41.336867 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-27 01:00:41.336871 | orchestrator | Friday 27 February 2026 01:00:31 +0000 (0:00:01.335) 0:11:57.024 ******* 2026-02-27 01:00:41.336875 | orchestrator | changed: 
[testbed-node-3] 2026-02-27 01:00:41.336879 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.336882 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.336886 | orchestrator | 2026-02-27 01:00:41.336890 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-27 01:00:41.336894 | orchestrator | Friday 27 February 2026 01:00:33 +0000 (0:00:01.561) 0:11:58.585 ******* 2026-02-27 01:00:41.336897 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:00:41.336901 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:00:41.336905 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:00:41.336908 | orchestrator | 2026-02-27 01:00:41.336912 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-27 01:00:41.336916 | orchestrator | Friday 27 February 2026 01:00:35 +0000 (0:00:01.839) 0:12:00.425 ******* 2026-02-27 01:00:41.336920 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-27 01:00:41.336923 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-27 01:00:41.336927 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-27 01:00:41.336931 | orchestrator | 2026-02-27 01:00:41.336935 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-27 01:00:41.336938 | orchestrator | Friday 27 February 2026 01:00:37 +0000 (0:00:02.784) 0:12:03.210 ******* 2026-02-27 01:00:41.336942 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.336949 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.336953 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.336957 | orchestrator 
| 2026-02-27 01:00:41.336960 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-27 01:00:41.336964 | orchestrator | Friday 27 February 2026 01:00:38 +0000 (0:00:00.380) 0:12:03.590 ******* 2026-02-27 01:00:41.336968 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:00:41.336972 | orchestrator | 2026-02-27 01:00:41.336975 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-27 01:00:41.336979 | orchestrator | Friday 27 February 2026 01:00:38 +0000 (0:00:00.550) 0:12:04.141 ******* 2026-02-27 01:00:41.336983 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.336986 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.336994 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.336997 | orchestrator | 2026-02-27 01:00:41.337001 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-27 01:00:41.337005 | orchestrator | Friday 27 February 2026 01:00:39 +0000 (0:00:00.639) 0:12:04.780 ******* 2026-02-27 01:00:41.337009 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:00:41.337012 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:00:41.337016 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:00:41.337020 | orchestrator | 2026-02-27 01:00:41.337028 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-27 01:00:41.337032 | orchestrator | Friday 27 February 2026 01:00:39 +0000 (0:00:00.351) 0:12:05.131 ******* 2026-02-27 01:00:41.337035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-27 01:00:41.337039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-27 01:00:41.337043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-27 01:00:41.337047 | orchestrator 
| skipping: [testbed-node-3] 2026-02-27 01:00:41.337050 | orchestrator | 2026-02-27 01:00:41.337054 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-27 01:00:41.337058 | orchestrator | Friday 27 February 2026 01:00:40 +0000 (0:00:00.614) 0:12:05.745 ******* 2026-02-27 01:00:41.337061 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:00:41.337065 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:00:41.337069 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:00:41.337072 | orchestrator | 2026-02-27 01:00:41.337076 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:00:41.337080 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-27 01:00:41.337084 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-27 01:00:41.337088 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-27 01:00:41.337091 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-27 01:00:41.337095 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-27 01:00:41.337101 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-27 01:00:41.337105 | orchestrator | 2026-02-27 01:00:41.337109 | orchestrator | 2026-02-27 01:00:41.337112 | orchestrator | 2026-02-27 01:00:41.337116 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:00:41.337120 | orchestrator | Friday 27 February 2026 01:00:40 +0000 (0:00:00.270) 0:12:06.016 ******* 2026-02-27 01:00:41.337123 | orchestrator | =============================================================================== 
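The PLAY RECAP above reports `failed=0` and `unreachable=0` on all six testbed nodes, which is what a CI gate would key on. As a minimal sketch (a hypothetical helper, not part of OSISM or Zuul), recap lines of this shape can be parsed and checked like this:

```python
import re

# Hypothetical helper: parse an Ansible PLAY RECAP line such as
# "testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 ..."
# into (host, stats) and decide whether the play succeeded on that host.
RECAP_RE = re.compile(r"(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)")

def parse_recap_line(line: str):
    m = RECAP_RE.search(line)
    if not m:
        return None
    stats = {k: int(v)
             for k, v in (kv.split("=") for kv in m.group("stats").split())}
    return m.group("host"), stats

def host_ok(stats: dict) -> bool:
    # A host passed if nothing failed and it stayed reachable.
    return stats.get("failed", 1) == 0 and stats.get("unreachable", 1) == 0

line = ("testbed-node-3 : ok=193  changed=45  unreachable=0 "
        "failed=0 skipped=162  rescued=0 ignored=0")
host, stats = parse_recap_line(line)
print(host, host_ok(stats))  # testbed-node-3 True
```

The same check applied per recap line is one way a wrapper script could turn this console output into a pass/fail signal.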
2026-02-27 01:00:41.337127 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 44.79s 2026-02-27 01:00:41.337131 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.35s 2026-02-27 01:00:41.337135 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.56s 2026-02-27 01:00:41.337138 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.50s 2026-02-27 01:00:41.337142 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.15s 2026-02-27 01:00:41.337146 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.13s 2026-02-27 01:00:41.337149 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.59s 2026-02-27 01:00:41.337153 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.69s 2026-02-27 01:00:41.337160 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.27s 2026-02-27 01:00:41.337181 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 7.61s 2026-02-27 01:00:41.337187 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.36s 2026-02-27 01:00:41.337193 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.86s 2026-02-27 01:00:41.337200 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.20s 2026-02-27 01:00:41.337206 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.89s 2026-02-27 01:00:41.337211 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.20s 2026-02-27 01:00:41.337221 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.18s 2026-02-27 
01:00:41.337226 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.14s 2026-02-27 01:00:41.337229 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.03s 2026-02-27 01:00:41.337233 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.70s 2026-02-27 01:00:41.337237 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 3.60s 2026-02-27 01:00:41.337241 | orchestrator | 2026-02-27 01:00:41 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:41.337244 | orchestrator | 2026-02-27 01:00:41 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:00:44.362344 | orchestrator | 2026-02-27 01:00:44 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state STARTED 2026-02-27 01:00:44.364453 | orchestrator | 2026-02-27 01:00:44 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:00:44.366870 | orchestrator | 2026-02-27 01:00:44 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:00:44.366895 | orchestrator | 2026-02-27 01:00:44 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:01:11.863655 | orchestrator | 2026-02-27 01:01:11 | INFO  | Task c04d931a-5cdd-4696-9142-da912dc92b59 is in state SUCCESS 2026-02-27 01:01:11.865968 | orchestrator | 2026-02-27 01:01:11.866084 | orchestrator | 2026-02-27 01:01:11.866102 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:01:11.866114 | orchestrator | 2026-02-27 01:01:11.866125 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:01:11.866135 | orchestrator |
Friday 27 February 2026 00:58:19 +0000 (0:00:00.264) 0:00:00.264 ******* 2026-02-27 01:01:11.866145 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:11.866156 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:11.866166 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:11.866176 | orchestrator | 2026-02-27 01:01:11.866186 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:01:11.866196 | orchestrator | Friday 27 February 2026 00:58:19 +0000 (0:00:00.312) 0:00:00.577 ******* 2026-02-27 01:01:11.866206 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-27 01:01:11.866216 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-27 01:01:11.866225 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-27 01:01:11.866259 | orchestrator | 2026-02-27 01:01:11.866269 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-27 01:01:11.866302 | orchestrator | 2026-02-27 01:01:11.866312 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-27 01:01:11.866322 | orchestrator | Friday 27 February 2026 00:58:19 +0000 (0:00:00.467) 0:00:01.044 ******* 2026-02-27 01:01:11.866332 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:01:11.866342 | orchestrator | 2026-02-27 01:01:11.866351 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-27 01:01:11.866361 | orchestrator | Friday 27 February 2026 00:58:20 +0000 (0:00:00.515) 0:00:01.560 ******* 2026-02-27 01:01:11.866370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-27 01:01:11.866380 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-27 
01:01:11.866390 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-27 01:01:11.866399 | orchestrator | 2026-02-27 01:01:11.866408 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-27 01:01:11.866418 | orchestrator | Friday 27 February 2026 00:58:21 +0000 (0:00:00.718) 0:00:02.279 ******* 2026-02-27 01:01:11.866431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.866445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.866480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.866494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.866515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.866529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.866542 | orchestrator | 2026-02-27 01:01:11.866553 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-27 01:01:11.866570 | orchestrator | Friday 27 February 2026 00:58:22 +0000 (0:00:01.852) 0:00:04.132 ******* 2026-02-27 01:01:11.866581 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:01:11.866593 | orchestrator | 2026-02-27 01:01:11.866604 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-27 01:01:11.866616 | orchestrator | Friday 27 February 2026 00:58:23 +0000 (0:00:00.616) 0:00:04.748 ******* 2026-02-27 01:01:11.866639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 
01:01:11.866660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.866672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.866685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.866710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 
01:01:11.866729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.866741 | orchestrator | 2026-02-27 01:01:11.866752 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-27 01:01:11.866763 | orchestrator | Friday 27 February 2026 00:58:26 +0000 (0:00:03.220) 0:00:07.969 ******* 2026-02-27 01:01:11.866775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 01:01:11.866793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 01:01:11.866807 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:11.866825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 01:01:11.866843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 01:01:11.866855 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:11.866867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 01:01:11.866879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 01:01:11.866890 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:11.866901 | orchestrator | 2026-02-27 01:01:11.866912 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-27 01:01:11.866926 | orchestrator | Friday 27 February 2026 00:58:28 +0000 (0:00:01.398) 0:00:09.367 ******* 2026-02-27 01:01:11.866942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 01:01:11.866958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 01:01:11.866969 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:11.866979 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 01:01:11.866989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 01:01:11.867000 | 
orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:11.867018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-27 01:01:11.867039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}})  2026-02-27 01:01:11.867050 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:11.867060 | orchestrator | 2026-02-27 01:01:11.867070 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-27 01:01:11.867080 | orchestrator | Friday 27 February 2026 00:58:29 +0000 (0:00:01.025) 0:00:10.393 ******* 2026-02-27 01:01:11.867090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.867100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.867115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.867144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.867155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.867166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.867183 | orchestrator | 2026-02-27 01:01:11.867192 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-27 01:01:11.867202 | orchestrator | Friday 27 February 2026 00:58:32 +0000 (0:00:02.952) 0:00:13.346 ******* 2026-02-27 01:01:11.867211 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:11.867221 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:11.867230 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:11.867260 | orchestrator | 2026-02-27 01:01:11.867270 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-27 01:01:11.867279 | orchestrator | Friday 27 February 2026 00:58:36 +0000 (0:00:03.997) 0:00:17.343 ******* 2026-02-27 01:01:11.867293 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:11.867303 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:11.867313 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:11.867322 | orchestrator | 2026-02-27 01:01:11.867332 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-27 01:01:11.867341 | orchestrator | Friday 27 February 2026 00:58:38 +0000 (0:00:02.196) 0:00:19.540 ******* 2026-02-27 01:01:11.867359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.867370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.867380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-27 01:01:11.867391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.867418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.867430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-27 01:01:11.867440 | orchestrator | 2026-02-27 01:01:11.867450 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-27 01:01:11.867460 | orchestrator | Friday 27 February 2026 00:58:40 +0000 (0:00:02.319) 0:00:21.859 ******* 2026-02-27 01:01:11.867470 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:11.867480 | 
orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:11.867489 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:11.867498 | orchestrator | 2026-02-27 01:01:11.867508 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-27 01:01:11.867518 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:00.375) 0:00:22.234 ******* 2026-02-27 01:01:11.867527 | orchestrator | 2026-02-27 01:01:11.867537 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-27 01:01:11.867547 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:00.084) 0:00:22.319 ******* 2026-02-27 01:01:11.867556 | orchestrator | 2026-02-27 01:01:11.867566 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-27 01:01:11.867583 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:00.081) 0:00:22.401 ******* 2026-02-27 01:01:11.867592 | orchestrator | 2026-02-27 01:01:11.867602 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-27 01:01:11.867611 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:00.068) 0:00:22.470 ******* 2026-02-27 01:01:11.867621 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:11.867630 | orchestrator | 2026-02-27 01:01:11.867640 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-27 01:01:11.867649 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:00.706) 0:00:23.176 ******* 2026-02-27 01:01:11.867659 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:11.867668 | orchestrator | 2026-02-27 01:01:11.867678 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-27 01:01:11.867688 | orchestrator | Friday 27 February 2026 00:58:42 +0000 (0:00:00.211) 0:00:23.388 ******* 2026-02-27 01:01:11.867697 | 
orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:11.867707 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:11.867717 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:11.867726 | orchestrator | 2026-02-27 01:01:11.867735 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-27 01:01:11.867745 | orchestrator | Friday 27 February 2026 00:59:36 +0000 (0:00:54.397) 0:01:17.785 ******* 2026-02-27 01:01:11.867754 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:11.867764 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:11.867786 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:11.867807 | orchestrator | 2026-02-27 01:01:11.867817 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-27 01:01:11.867827 | orchestrator | Friday 27 February 2026 01:00:58 +0000 (0:01:21.712) 0:02:39.498 ******* 2026-02-27 01:01:11.867836 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:01:11.867846 | orchestrator | 2026-02-27 01:01:11.867855 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-27 01:01:11.867869 | orchestrator | Friday 27 February 2026 01:00:59 +0000 (0:00:00.842) 0:02:40.341 ******* 2026-02-27 01:01:11.867879 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:11.867888 | orchestrator | 2026-02-27 01:01:11.867898 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-27 01:01:11.867907 | orchestrator | Friday 27 February 2026 01:01:01 +0000 (0:00:02.672) 0:02:43.014 ******* 2026-02-27 01:01:11.867917 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:11.867926 | orchestrator | 2026-02-27 01:01:11.867936 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-27 
01:01:11.867945 | orchestrator | Friday 27 February 2026 01:01:04 +0000 (0:00:02.365) 0:02:45.379 ******* 2026-02-27 01:01:11.867955 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:11.867965 | orchestrator | 2026-02-27 01:01:11.867974 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-27 01:01:11.867984 | orchestrator | Friday 27 February 2026 01:01:06 +0000 (0:00:02.721) 0:02:48.100 ******* 2026-02-27 01:01:11.867993 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:11.868003 | orchestrator | 2026-02-27 01:01:11.868018 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:01:11.868029 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-27 01:01:11.868039 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-27 01:01:11.868049 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-27 01:01:11.868059 | orchestrator | 2026-02-27 01:01:11.868068 | orchestrator | 2026-02-27 01:01:11.868078 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:01:11.868093 | orchestrator | Friday 27 February 2026 01:01:09 +0000 (0:00:02.752) 0:02:50.853 ******* 2026-02-27 01:01:11.868103 | orchestrator | =============================================================================== 2026-02-27 01:01:11.868112 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.71s 2026-02-27 01:01:11.868122 | orchestrator | opensearch : Restart opensearch container ------------------------------ 54.40s 2026-02-27 01:01:11.868132 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.00s 2026-02-27 01:01:11.868141 | orchestrator | service-cert-copy : 
opensearch | Copying over extra CA certificates ----- 3.22s 2026-02-27 01:01:11.868152 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.95s 2026-02-27 01:01:11.868168 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.75s 2026-02-27 01:01:11.868183 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.72s 2026-02-27 01:01:11.868199 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.67s 2026-02-27 01:01:11.868216 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.37s 2026-02-27 01:01:11.868231 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.32s 2026-02-27 01:01:11.868279 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.20s 2026-02-27 01:01:11.868294 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.85s 2026-02-27 01:01:11.868309 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.40s 2026-02-27 01:01:11.868325 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.03s 2026-02-27 01:01:11.868340 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.84s 2026-02-27 01:01:11.868356 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.72s 2026-02-27 01:01:11.868370 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.71s 2026-02-27 01:01:11.868385 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-02-27 01:01:11.868400 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-02-27 01:01:11.868415 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.47s 2026-02-27 01:01:11.868430 | orchestrator | 2026-02-27 01:01:11 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:01:11.868446 | orchestrator | 2026-02-27 01:01:11 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:01:11.868462 | orchestrator | 2026-02-27 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:01:14.919049 | orchestrator | 2026-02-27 01:01:14 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:01:14.921916 | orchestrator | 2026-02-27 01:01:14 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:01:14.922530 | orchestrator | 2026-02-27 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:01:17.970940 | orchestrator | 2026-02-27 01:01:17 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:01:17.972201 | orchestrator | 2026-02-27 01:01:17 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:01:17.972333 | orchestrator | 2026-02-27 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:01:21.021373 | orchestrator | 2026-02-27 01:01:21 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:01:21.023317 | orchestrator | 2026-02-27 01:01:21 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:01:21.023349 | orchestrator | 2026-02-27 01:01:21 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:01:24.077006 | orchestrator | 2026-02-27 01:01:24 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:01:24.077108 | orchestrator | 2026-02-27 01:01:24 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:01:24.077607 | orchestrator | 2026-02-27 01:01:24 | INFO  | Wait 1 second(s) until the next check 2026-02-27 
01:01:27.121719 | orchestrator | 2026-02-27 01:01:27 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:01:27.124459 | orchestrator | 2026-02-27 01:01:27 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:01:27.124575 | orchestrator | 2026-02-27 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:01:30.175503 | orchestrator | 2026-02-27 01:01:30 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state STARTED 2026-02-27 01:01:30.177324 | orchestrator | 2026-02-27 01:01:30 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED 2026-02-27 01:01:30.177421 | orchestrator | 2026-02-27 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:01:33.224522 | orchestrator | 2026-02-27 01:01:33 | INFO  | Task 8dd899fb-d6f8-4268-ba0b-65b7c94262dd is in state SUCCESS 2026-02-27 01:01:33.225400 | orchestrator | 2026-02-27 01:01:33.225444 | orchestrator | 2026-02-27 01:01:33.225456 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-02-27 01:01:33.225466 | orchestrator | 2026-02-27 01:01:33.225475 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-02-27 01:01:33.225484 | orchestrator | Friday 27 February 2026 00:58:18 +0000 (0:00:00.093) 0:00:00.093 ******* 2026-02-27 01:01:33.225493 | orchestrator | ok: [localhost] => { 2026-02-27 01:01:33.225504 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-02-27 01:01:33.225513 | orchestrator | } 2026-02-27 01:01:33.225522 | orchestrator | 2026-02-27 01:01:33.225531 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-02-27 01:01:33.225540 | orchestrator | Friday 27 February 2026 00:58:19 +0000 (0:00:00.070) 0:00:00.163 ******* 2026-02-27 01:01:33.225549 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-02-27 01:01:33.225560 | orchestrator | ...ignoring 2026-02-27 01:01:33.225569 | orchestrator | 2026-02-27 01:01:33.225578 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-02-27 01:01:33.225586 | orchestrator | Friday 27 February 2026 00:58:21 +0000 (0:00:02.910) 0:00:03.074 ******* 2026-02-27 01:01:33.225595 | orchestrator | skipping: [localhost] 2026-02-27 01:01:33.225604 | orchestrator | 2026-02-27 01:01:33.225612 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-02-27 01:01:33.225621 | orchestrator | Friday 27 February 2026 00:58:21 +0000 (0:00:00.065) 0:00:03.140 ******* 2026-02-27 01:01:33.225816 | orchestrator | ok: [localhost] 2026-02-27 01:01:33.225826 | orchestrator | 2026-02-27 01:01:33.225835 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:01:33.225844 | orchestrator | 2026-02-27 01:01:33.225852 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:01:33.225861 | orchestrator | Friday 27 February 2026 00:58:22 +0000 (0:00:00.290) 0:00:03.430 ******* 2026-02-27 01:01:33.225870 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.225879 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.225887 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.225896 | orchestrator | 2026-02-27 01:01:33.225905 | 
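The failing "Check MariaDB service" task above is a plain reachability probe: Ansible's wait_for connects to 192.168.16.9:3306 and looks for the string "MariaDB" in the server greeting, and it times out (as the preceding msg says is fine) when the service has not yet been deployed. A minimal sketch of that kind of banner check, assuming nothing beyond the Python standard library (the `wait_for_banner` helper name is illustrative, not part of the playbook):

```python
import socket

def wait_for_banner(host: str, port: int, needle: str, timeout: float = 2.0) -> bool:
    """Connect to host:port and report whether the server's initial
    greeting contains `needle` -- roughly what the wait_for check in
    the log does with its search string before the timeout expires."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            # MariaDB speaks first: its handshake packet carries the
            # server version string, e.g. "...-MariaDB-log".
            banner = sock.recv(1024)
            return needle.encode() in banner
    except OSError:
        # Connection refused or timed out: service not (yet) deployed.
        return False
```

On a fresh deploy this returns False (mirroring the "Timeout when waiting for search string MariaDB" failure), so the playbook keeps kolla_action at its default instead of switching to upgrade.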
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:01:33.225914 | orchestrator | Friday 27 February 2026 00:58:22 +0000 (0:00:00.358) 0:00:03.789 ******* 2026-02-27 01:01:33.225945 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-27 01:01:33.225956 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-27 01:01:33.225965 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-27 01:01:33.225974 | orchestrator | 2026-02-27 01:01:33.225982 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-27 01:01:33.225991 | orchestrator | 2026-02-27 01:01:33.226000 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-27 01:01:33.226009 | orchestrator | Friday 27 February 2026 00:58:23 +0000 (0:00:00.666) 0:00:04.456 ******* 2026-02-27 01:01:33.226064 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-27 01:01:33.226074 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-27 01:01:33.226082 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-27 01:01:33.226091 | orchestrator | 2026-02-27 01:01:33.226100 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-27 01:01:33.226109 | orchestrator | Friday 27 February 2026 00:58:23 +0000 (0:00:00.396) 0:00:04.852 ******* 2026-02-27 01:01:33.226118 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:01:33.226128 | orchestrator | 2026-02-27 01:01:33.226137 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-27 01:01:33.226158 | orchestrator | Friday 27 February 2026 00:58:24 +0000 (0:00:00.685) 0:00:05.538 ******* 2026-02-27 01:01:33.226186 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.226201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.226225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.226236 | orchestrator | 2026-02-27 01:01:33.226252 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-27 01:01:33.226261 | orchestrator | Friday 27 February 2026 00:58:28 +0000 (0:00:03.650) 0:00:09.189 ******* 2026-02-27 01:01:33.226270 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.226279 | orchestrator | 
skipping: [testbed-node-2] 2026-02-27 01:01:33.226326 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.226335 | orchestrator | 2026-02-27 01:01:33.226344 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-27 01:01:33.226353 | orchestrator | Friday 27 February 2026 00:58:28 +0000 (0:00:00.801) 0:00:09.990 ******* 2026-02-27 01:01:33.226362 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.226370 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.226379 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.226388 | orchestrator | 2026-02-27 01:01:33.226397 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-27 01:01:33.226412 | orchestrator | Friday 27 February 2026 00:58:30 +0000 (0:00:01.886) 0:00:11.877 ******* 2026-02-27 01:01:33.226422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.226444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.226457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.226475 | orchestrator | 2026-02-27 01:01:33.226485 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-27 01:01:33.226495 | orchestrator | Friday 27 February 2026 00:58:36 +0000 (0:00:05.339) 0:00:17.217 ******* 2026-02-27 01:01:33.226505 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.226515 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.226525 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.226536 | orchestrator | 2026-02-27 01:01:33.226546 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-27 01:01:33.226556 | orchestrator | Friday 27 February 2026 00:58:37 +0000 (0:00:01.167) 0:00:18.384 ******* 2026-02-27 01:01:33.226567 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.226577 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:33.226592 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:33.226602 | orchestrator | 2026-02-27 01:01:33.226612 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-27 01:01:33.226623 | orchestrator | Friday 27 February 2026 00:58:41 +0000 (0:00:04.610) 0:00:22.995 ******* 2026-02-27 01:01:33.226633 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:01:33.226643 | orchestrator | 2026-02-27 01:01:33.226653 | 
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-27 01:01:33.226663 | orchestrator | Friday 27 February 2026 00:58:42 +0000 (0:00:00.549) 0:00:23.544 ******* 2026-02-27 01:01:33.226682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.226700 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.226715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.226727 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.226745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 
01:01:33.226762 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.226772 | orchestrator | 2026-02-27 01:01:33.226783 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-27 01:01:33.226793 | orchestrator | Friday 27 February 2026 00:58:46 +0000 (0:00:04.385) 0:00:27.930 ******* 2026-02-27 01:01:33.226804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.226816 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.226835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.226855 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.226871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.226886 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.226901 | orchestrator | 2026-02-27 01:01:33.226915 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-27 01:01:33.226929 | orchestrator | Friday 27 February 2026 00:58:50 +0000 (0:00:04.144) 0:00:32.074 ******* 2026-02-27 01:01:33.226958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.226990 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.227005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.227019 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.227040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-27 01:01:33.227081 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.227095 | orchestrator | 2026-02-27 01:01:33.227109 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-27 01:01:33.227123 | orchestrator | Friday 27 February 2026 00:58:54 +0000 (0:00:03.146) 0:00:35.220 ******* 2026-02-27 01:01:33.227148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.227172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.227211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 
5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-27 01:01:33.227229 | orchestrator | 2026-02-27 01:01:33.227243 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-27 01:01:33.227258 | orchestrator | Friday 27 February 2026 00:58:57 +0000 (0:00:03.488) 0:00:38.709 ******* 2026-02-27 01:01:33.227272 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.227312 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:33.227327 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:33.227342 | orchestrator | 2026-02-27 01:01:33.227356 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-27 01:01:33.227371 | orchestrator | Friday 27 February 2026 00:58:58 +0000 (0:00:00.912) 0:00:39.622 ******* 2026-02-27 01:01:33.227387 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.227402 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.227418 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.227436 | orchestrator | 2026-02-27 01:01:33.227454 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-27 01:01:33.227472 | orchestrator | Friday 27 February 2026 00:58:58 +0000 (0:00:00.477) 0:00:40.099 ******* 2026-02-27 01:01:33.227491 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.227509 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.227525 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.227536 | orchestrator | 2026-02-27 01:01:33.227547 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-27 01:01:33.227558 | orchestrator | Friday 27 February 2026 00:58:59 +0000 (0:00:00.312) 0:00:40.411 ******* 2026-02-27 01:01:33.227572 | orchestrator | fatal: 
[testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-27 01:01:33.227591 | orchestrator | ...ignoring 2026-02-27 01:01:33.227632 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-27 01:01:33.227652 | orchestrator | ...ignoring 2026-02-27 01:01:33.227671 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-27 01:01:33.227689 | orchestrator | ...ignoring 2026-02-27 01:01:33.227707 | orchestrator | 2026-02-27 01:01:33.227725 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-27 01:01:33.227744 | orchestrator | Friday 27 February 2026 00:59:10 +0000 (0:00:10.834) 0:00:51.246 ******* 2026-02-27 01:01:33.227760 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.227776 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.227793 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.227829 | orchestrator | 2026-02-27 01:01:33.227848 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-27 01:01:33.227866 | orchestrator | Friday 27 February 2026 00:59:10 +0000 (0:00:00.487) 0:00:51.733 ******* 2026-02-27 01:01:33.227884 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.227904 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.227922 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.227939 | orchestrator | 2026-02-27 01:01:33.227957 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-27 01:01:33.227976 | orchestrator | Friday 27 February 2026 00:59:11 +0000 (0:00:00.688) 0:00:52.422 ******* 2026-02-27 01:01:33.227993 | 
orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.228012 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.228030 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.228049 | orchestrator | 2026-02-27 01:01:33.228066 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-27 01:01:33.228085 | orchestrator | Friday 27 February 2026 00:59:11 +0000 (0:00:00.455) 0:00:52.877 ******* 2026-02-27 01:01:33.228103 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.228122 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.228140 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.228158 | orchestrator | 2026-02-27 01:01:33.228179 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-27 01:01:33.228253 | orchestrator | Friday 27 February 2026 00:59:12 +0000 (0:00:00.492) 0:00:53.369 ******* 2026-02-27 01:01:33.228640 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.228682 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.228693 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.228705 | orchestrator | 2026-02-27 01:01:33.228716 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-27 01:01:33.228727 | orchestrator | Friday 27 February 2026 00:59:12 +0000 (0:00:00.470) 0:00:53.840 ******* 2026-02-27 01:01:33.228738 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.228749 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.228760 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.228771 | orchestrator | 2026-02-27 01:01:33.228787 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-27 01:01:33.228805 | orchestrator | Friday 27 February 2026 00:59:13 +0000 (0:00:00.699) 0:00:54.539 ******* 2026-02-27 01:01:33.228833 | orchestrator | 
skipping: [testbed-node-1] 2026-02-27 01:01:33.228853 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.228870 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-02-27 01:01:33.228888 | orchestrator | 2026-02-27 01:01:33.228905 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-02-27 01:01:33.228917 | orchestrator | Friday 27 February 2026 00:59:13 +0000 (0:00:00.422) 0:00:54.962 ******* 2026-02-27 01:01:33.228929 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.228942 | orchestrator | 2026-02-27 01:01:33.228955 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-02-27 01:01:33.228986 | orchestrator | Friday 27 February 2026 00:59:24 +0000 (0:00:10.429) 0:01:05.391 ******* 2026-02-27 01:01:33.229000 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.229013 | orchestrator | 2026-02-27 01:01:33.229025 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-27 01:01:33.229038 | orchestrator | Friday 27 February 2026 00:59:24 +0000 (0:00:00.119) 0:01:05.511 ******* 2026-02-27 01:01:33.229052 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.229061 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.229069 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.229077 | orchestrator | 2026-02-27 01:01:33.229084 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-02-27 01:01:33.229092 | orchestrator | Friday 27 February 2026 00:59:25 +0000 (0:00:01.104) 0:01:06.616 ******* 2026-02-27 01:01:33.229100 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.229108 | orchestrator | 2026-02-27 01:01:33.229116 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-02-27 01:01:33.229124 | orchestrator | 
Friday 27 February 2026 00:59:33 +0000 (0:00:08.379) 0:01:14.996 ******* 2026-02-27 01:01:33.229131 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.229139 | orchestrator | 2026-02-27 01:01:33.229147 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2026-02-27 01:01:33.229156 | orchestrator | Friday 27 February 2026 00:59:35 +0000 (0:00:01.775) 0:01:16.771 ******* 2026-02-27 01:01:33.229164 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.229171 | orchestrator | 2026-02-27 01:01:33.229179 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-27 01:01:33.229187 | orchestrator | Friday 27 February 2026 00:59:39 +0000 (0:00:03.491) 0:01:20.263 ******* 2026-02-27 01:01:33.229195 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.229203 | orchestrator | 2026-02-27 01:01:33.229211 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-27 01:01:33.229218 | orchestrator | Friday 27 February 2026 00:59:39 +0000 (0:00:00.151) 0:01:20.414 ******* 2026-02-27 01:01:33.229226 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.229234 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.229244 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.229253 | orchestrator | 2026-02-27 01:01:33.229263 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-27 01:01:33.229280 | orchestrator | Friday 27 February 2026 00:59:39 +0000 (0:00:00.362) 0:01:20.776 ******* 2026-02-27 01:01:33.229312 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.229322 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-27 01:01:33.229331 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:33.229341 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:33.229350 | 
orchestrator | 2026-02-27 01:01:33.229359 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-27 01:01:33.229368 | orchestrator | skipping: no hosts matched 2026-02-27 01:01:33.229378 | orchestrator | 2026-02-27 01:01:33.229387 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-27 01:01:33.229396 | orchestrator | 2026-02-27 01:01:33.229405 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-27 01:01:33.229415 | orchestrator | Friday 27 February 2026 00:59:40 +0000 (0:00:00.643) 0:01:21.419 ******* 2026-02-27 01:01:33.229424 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:01:33.229433 | orchestrator | 2026-02-27 01:01:33.229442 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-27 01:01:33.229452 | orchestrator | Friday 27 February 2026 00:59:59 +0000 (0:00:19.289) 0:01:40.709 ******* 2026-02-27 01:01:33.229461 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.229470 | orchestrator | 2026-02-27 01:01:33.229480 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-27 01:01:33.229489 | orchestrator | Friday 27 February 2026 01:00:15 +0000 (0:00:15.655) 0:01:56.364 ******* 2026-02-27 01:01:33.229503 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.229511 | orchestrator | 2026-02-27 01:01:33.229519 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-27 01:01:33.229527 | orchestrator | 2026-02-27 01:01:33.229535 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-27 01:01:33.229542 | orchestrator | Friday 27 February 2026 01:00:17 +0000 (0:00:02.635) 0:01:58.999 ******* 2026-02-27 01:01:33.229550 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:01:33.229558 | 
orchestrator | 2026-02-27 01:01:33.229566 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-27 01:01:33.229586 | orchestrator | Friday 27 February 2026 01:00:36 +0000 (0:00:19.000) 0:02:18.000 ******* 2026-02-27 01:01:33.229595 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.229602 | orchestrator | 2026-02-27 01:01:33.229610 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-27 01:01:33.229618 | orchestrator | Friday 27 February 2026 01:00:52 +0000 (0:00:15.615) 0:02:33.615 ******* 2026-02-27 01:01:33.229626 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.229633 | orchestrator | 2026-02-27 01:01:33.229641 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-27 01:01:33.229649 | orchestrator | 2026-02-27 01:01:33.229657 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-27 01:01:33.229665 | orchestrator | Friday 27 February 2026 01:00:55 +0000 (0:00:02.655) 0:02:36.270 ******* 2026-02-27 01:01:33.229673 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.229681 | orchestrator | 2026-02-27 01:01:33.229688 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-27 01:01:33.229696 | orchestrator | Friday 27 February 2026 01:01:08 +0000 (0:00:13.132) 0:02:49.403 ******* 2026-02-27 01:01:33.229704 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.229712 | orchestrator | 2026-02-27 01:01:33.229720 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-27 01:01:33.229727 | orchestrator | Friday 27 February 2026 01:01:12 +0000 (0:00:04.673) 0:02:54.076 ******* 2026-02-27 01:01:33.229735 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.229743 | orchestrator | 2026-02-27 01:01:33.229751 | orchestrator | PLAY [Apply 
mariadb post-configuration] **************************************** 2026-02-27 01:01:33.229759 | orchestrator | 2026-02-27 01:01:33.229767 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-27 01:01:33.229774 | orchestrator | Friday 27 February 2026 01:01:16 +0000 (0:00:03.173) 0:02:57.250 ******* 2026-02-27 01:01:33.229782 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:01:33.229790 | orchestrator | 2026-02-27 01:01:33.229798 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-27 01:01:33.229806 | orchestrator | Friday 27 February 2026 01:01:16 +0000 (0:00:00.533) 0:02:57.783 ******* 2026-02-27 01:01:33.229814 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.229822 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.229829 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.229837 | orchestrator | 2026-02-27 01:01:33.229845 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-27 01:01:33.229855 | orchestrator | Friday 27 February 2026 01:01:19 +0000 (0:00:02.625) 0:03:00.409 ******* 2026-02-27 01:01:33.229868 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.229881 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.229894 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.229906 | orchestrator | 2026-02-27 01:01:33.229919 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-27 01:01:33.229932 | orchestrator | Friday 27 February 2026 01:01:21 +0000 (0:00:02.394) 0:03:02.804 ******* 2026-02-27 01:01:33.229946 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.229959 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.229971 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.229992 | 
orchestrator | 2026-02-27 01:01:33.230005 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-27 01:01:33.230044 | orchestrator | Friday 27 February 2026 01:01:24 +0000 (0:00:02.403) 0:03:05.207 ******* 2026-02-27 01:01:33.230055 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.230062 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.230070 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:01:33.230078 | orchestrator | 2026-02-27 01:01:33.230086 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-27 01:01:33.230094 | orchestrator | Friday 27 February 2026 01:01:26 +0000 (0:00:02.372) 0:03:07.580 ******* 2026-02-27 01:01:33.230102 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:01:33.230110 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:01:33.230118 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:01:33.230125 | orchestrator | 2026-02-27 01:01:33.230138 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-27 01:01:33.230146 | orchestrator | Friday 27 February 2026 01:01:29 +0000 (0:00:03.432) 0:03:11.012 ******* 2026-02-27 01:01:33.230154 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:01:33.230162 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:01:33.230170 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:01:33.230178 | orchestrator | 2026-02-27 01:01:33.230186 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:01:33.230194 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-02-27 01:01:33.230204 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-27 01:01:33.230213 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  
rescued=0 ignored=1  2026-02-27 01:01:33.230221 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-27 01:01:33.230229 | orchestrator | 2026-02-27 01:01:33.230237 | orchestrator | 2026-02-27 01:01:33.230244 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:01:33.230252 | orchestrator | Friday 27 February 2026 01:01:30 +0000 (0:00:00.246) 0:03:11.259 ******* 2026-02-27 01:01:33.230260 | orchestrator | =============================================================================== 2026-02-27 01:01:33.230268 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.29s 2026-02-27 01:01:33.230276 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.27s 2026-02-27 01:01:33.230324 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.13s 2026-02-27 01:01:33.230333 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s 2026-02-27 01:01:33.230341 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.43s 2026-02-27 01:01:33.230349 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.38s 2026-02-27 01:01:33.230357 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.34s 2026-02-27 01:01:33.230365 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.29s 2026-02-27 01:01:33.230372 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.67s 2026-02-27 01:01:33.230380 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.61s 2026-02-27 01:01:33.230388 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 4.39s 2026-02-27 
01:01:33.230396 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.14s
2026-02-27 01:01:33.230403 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.65s
2026-02-27 01:01:33.230417 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 3.49s
2026-02-27 01:01:33.230425 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.49s
2026-02-27 01:01:33.230433 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.43s
2026-02-27 01:01:33.230440 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.17s
2026-02-27 01:01:33.230448 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.15s
2026-02-27 01:01:33.230456 | orchestrator | Check MariaDB service --------------------------------------------------- 2.91s
2026-02-27 01:01:33.230464 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.63s
2026-02-27 01:01:33.230472 | orchestrator | 2026-02-27 01:01:33 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED
2026-02-27 01:01:33.230761 | orchestrator | 2026-02-27 01:01:33 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED
2026-02-27 01:01:33.233350 | orchestrator | 2026-02-27 01:01:33 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state STARTED
2026-02-27 01:01:33.233551 | orchestrator | 2026-02-27 01:01:33 | INFO  | Wait 1 second(s) until the next check
[... identical status checks for the same three tasks repeated every ~3 seconds (all STARTED) from 01:01:36 through 01:02:58 ...]
2026-02-27 01:03:01.782282 | orchestrator | 2026-02-27 01:03:01 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED
2026-02-27 01:03:01.787287 | orchestrator | 2026-02-27 01:03:01 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED
2026-02-27 01:03:01.791677 | orchestrator | 2026-02-27 01:03:01 | INFO  | Task 1736d673-bee3-4024-bd6f-ebda106f77ef is in state SUCCESS
2026-02-27 01:03:01.794262 | orchestrator |
2026-02-27 01:03:01.794346 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-27 01:03:01.794361 | orchestrator | 2.16.14
2026-02-27 01:03:01.794374 | orchestrator |
2026-02-27 01:03:01.794387 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-27 01:03:01.794398 | orchestrator |
2026-02-27 01:03:01.794409 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-27 01:03:01.794421 | orchestrator | Friday 27 February 2026 01:00:46 +0000 (0:00:00.648) 0:00:00.648 *******
2026-02-27 01:03:01.794432 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:03:01.794444 | orchestrator |
2026-02-27 01:03:01.794456 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-27 01:03:01.794539 | orchestrator | Friday 27 February 2026 01:00:46 +0000 (0:00:00.712) 0:00:01.361 *******
2026-02-27 01:03:01.794554 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:03:01.794566 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:03:01.794577 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:03:01.794594 |
orchestrator | 2026-02-27 01:03:01.794613 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-27 01:03:01.794630 | orchestrator | Friday 27 February 2026 01:00:47 +0000 (0:00:00.618) 0:00:01.980 ******* 2026-02-27 01:03:01.794651 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.794672 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.794689 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.794700 | orchestrator | 2026-02-27 01:03:01.794711 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-27 01:03:01.794722 | orchestrator | Friday 27 February 2026 01:00:47 +0000 (0:00:00.316) 0:00:02.296 ******* 2026-02-27 01:03:01.794733 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.794744 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.794754 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.794765 | orchestrator | 2026-02-27 01:03:01.794776 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-27 01:03:01.794787 | orchestrator | Friday 27 February 2026 01:00:48 +0000 (0:00:00.862) 0:00:03.159 ******* 2026-02-27 01:03:01.794797 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.794808 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.794818 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.794829 | orchestrator | 2026-02-27 01:03:01.794840 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-27 01:03:01.794851 | orchestrator | Friday 27 February 2026 01:00:48 +0000 (0:00:00.318) 0:00:03.477 ******* 2026-02-27 01:03:01.794898 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.794909 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.794920 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.794931 | orchestrator | 2026-02-27 01:03:01.794942 | orchestrator | TASK [ceph-facts 
: Set_fact discovered_interpreter_python] ********************* 2026-02-27 01:03:01.794953 | orchestrator | Friday 27 February 2026 01:00:49 +0000 (0:00:00.330) 0:00:03.807 ******* 2026-02-27 01:03:01.794964 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.794975 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.794985 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.794996 | orchestrator | 2026-02-27 01:03:01.795007 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-27 01:03:01.795018 | orchestrator | Friday 27 February 2026 01:00:49 +0000 (0:00:00.326) 0:00:04.134 ******* 2026-02-27 01:03:01.795029 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.795040 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.795051 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.795062 | orchestrator | 2026-02-27 01:03:01.795073 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-27 01:03:01.795084 | orchestrator | Friday 27 February 2026 01:00:50 +0000 (0:00:00.538) 0:00:04.673 ******* 2026-02-27 01:03:01.795094 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.795105 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.795116 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.795127 | orchestrator | 2026-02-27 01:03:01.795137 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-27 01:03:01.795148 | orchestrator | Friday 27 February 2026 01:00:50 +0000 (0:00:00.316) 0:00:04.989 ******* 2026-02-27 01:03:01.795159 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-27 01:03:01.795170 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-27 01:03:01.795181 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-02-27 01:03:01.795192 | orchestrator | 2026-02-27 01:03:01.795203 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-27 01:03:01.795213 | orchestrator | Friday 27 February 2026 01:00:51 +0000 (0:00:00.720) 0:00:05.710 ******* 2026-02-27 01:03:01.795229 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.795249 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.795266 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.795278 | orchestrator | 2026-02-27 01:03:01.795289 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-27 01:03:01.795300 | orchestrator | Friday 27 February 2026 01:00:51 +0000 (0:00:00.485) 0:00:06.196 ******* 2026-02-27 01:03:01.795311 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-27 01:03:01.795322 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-27 01:03:01.795332 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-27 01:03:01.795343 | orchestrator | 2026-02-27 01:03:01.795354 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-27 01:03:01.795365 | orchestrator | Friday 27 February 2026 01:00:53 +0000 (0:00:02.348) 0:00:08.545 ******* 2026-02-27 01:03:01.795376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-27 01:03:01.795387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-27 01:03:01.795398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-27 01:03:01.795423 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.795434 | orchestrator | 2026-02-27 01:03:01.795510 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-27 
01:03:01.795524 | orchestrator | Friday 27 February 2026 01:00:54 +0000 (0:00:00.725) 0:00:09.270 ******* 2026-02-27 01:03:01.795537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.795564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.795581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.795599 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.795616 | orchestrator | 2026-02-27 01:03:01.795634 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-27 01:03:01.795651 | orchestrator | Friday 27 February 2026 01:00:55 +0000 (0:00:00.947) 0:00:10.217 ******* 2026-02-27 01:03:01.795672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.795693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.795713 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.795732 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.795749 | orchestrator | 2026-02-27 01:03:01.795769 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-27 01:03:01.795782 | orchestrator | Friday 27 February 2026 01:00:56 +0000 (0:00:00.395) 0:00:10.613 ******* 2026-02-27 01:03:01.795795 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '66ff6802ffc8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-27 01:00:52.361949', 'end': '2026-02-27 01:00:52.416701', 'delta': '0:00:00.054752', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['66ff6802ffc8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-27 01:03:01.795812 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c523cf54e1cf', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2026-02-27 01:00:53.200125', 'end': '2026-02-27 01:00:53.246089', 'delta': '0:00:00.045964', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c523cf54e1cf'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-27 01:03:01.795857 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd68a3058b6cc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-27 01:00:53.787415', 'end': '2026-02-27 01:00:53.827951', 'delta': '0:00:00.040536', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d68a3058b6cc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-27 01:03:01.795870 | orchestrator | 2026-02-27 01:03:01.795881 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-27 01:03:01.795892 | orchestrator | Friday 27 February 2026 01:00:56 +0000 (0:00:00.204) 0:00:10.817 ******* 2026-02-27 01:03:01.795903 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.795914 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.795925 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.795936 | orchestrator | 2026-02-27 01:03:01.795947 | orchestrator | TASK [ceph-facts : Get current 
fsid if cluster is already running] ************* 2026-02-27 01:03:01.795958 | orchestrator | Friday 27 February 2026 01:00:56 +0000 (0:00:00.458) 0:00:11.275 ******* 2026-02-27 01:03:01.795968 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-27 01:03:01.795980 | orchestrator | 2026-02-27 01:03:01.795990 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-27 01:03:01.796001 | orchestrator | Friday 27 February 2026 01:00:58 +0000 (0:00:01.908) 0:00:13.184 ******* 2026-02-27 01:03:01.796012 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796023 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796034 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796045 | orchestrator | 2026-02-27 01:03:01.796056 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-27 01:03:01.796071 | orchestrator | Friday 27 February 2026 01:00:58 +0000 (0:00:00.312) 0:00:13.496 ******* 2026-02-27 01:03:01.796090 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796108 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796125 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796143 | orchestrator | 2026-02-27 01:03:01.796160 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-27 01:03:01.796178 | orchestrator | Friday 27 February 2026 01:00:59 +0000 (0:00:00.453) 0:00:13.950 ******* 2026-02-27 01:03:01.796195 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796214 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796233 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796251 | orchestrator | 2026-02-27 01:03:01.796269 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-27 01:03:01.796288 | orchestrator | Friday 27 February 2026 
01:00:59 +0000 (0:00:00.545) 0:00:14.496 ******* 2026-02-27 01:03:01.796306 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.796324 | orchestrator | 2026-02-27 01:03:01.796335 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-27 01:03:01.796346 | orchestrator | Friday 27 February 2026 01:01:00 +0000 (0:00:00.145) 0:00:14.641 ******* 2026-02-27 01:03:01.796357 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796368 | orchestrator | 2026-02-27 01:03:01.796379 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-27 01:03:01.796389 | orchestrator | Friday 27 February 2026 01:01:00 +0000 (0:00:00.248) 0:00:14.890 ******* 2026-02-27 01:03:01.796400 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796544 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796560 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796571 | orchestrator | 2026-02-27 01:03:01.796583 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-27 01:03:01.796593 | orchestrator | Friday 27 February 2026 01:01:00 +0000 (0:00:00.339) 0:00:15.229 ******* 2026-02-27 01:03:01.796604 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796615 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796626 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796637 | orchestrator | 2026-02-27 01:03:01.796648 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-27 01:03:01.796660 | orchestrator | Friday 27 February 2026 01:01:01 +0000 (0:00:00.357) 0:00:15.587 ******* 2026-02-27 01:03:01.796679 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796698 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796714 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796731 | 
orchestrator | 2026-02-27 01:03:01.796748 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-27 01:03:01.796766 | orchestrator | Friday 27 February 2026 01:01:01 +0000 (0:00:00.539) 0:00:16.127 ******* 2026-02-27 01:03:01.796785 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796805 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796823 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796834 | orchestrator | 2026-02-27 01:03:01.796845 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-27 01:03:01.796856 | orchestrator | Friday 27 February 2026 01:01:01 +0000 (0:00:00.369) 0:00:16.496 ******* 2026-02-27 01:03:01.796867 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796877 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796888 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796899 | orchestrator | 2026-02-27 01:03:01.796910 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-27 01:03:01.796921 | orchestrator | Friday 27 February 2026 01:01:02 +0000 (0:00:00.376) 0:00:16.873 ******* 2026-02-27 01:03:01.796932 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.796943 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.796962 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.796985 | orchestrator | 2026-02-27 01:03:01.796997 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-27 01:03:01.797008 | orchestrator | Friday 27 February 2026 01:01:02 +0000 (0:00:00.317) 0:00:17.190 ******* 2026-02-27 01:03:01.797019 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.797030 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.797040 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.797051 | 
orchestrator | 2026-02-27 01:03:01.797062 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-27 01:03:01.797073 | orchestrator | Friday 27 February 2026 01:01:03 +0000 (0:00:00.619) 0:00:17.810 ******* 2026-02-27 01:03:01.797086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a', 'dm-uuid-LVM-ktZNB2qrs3DaCnLkAdNHrqYVG23HKb1FGHO1W2U1zR2CbXChmoBj0ctfCoqUzjKf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797099 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d', 'dm-uuid-LVM-qJU288vwWpkc3KXMmYUCJORUt3aDMziKdcrQEt5vLA8Hjbzwqjl8UH3NpNbOBh11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-27 01:03:01.797132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797253 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6xG180-8oDB-fzAy-pAEY-lUOZ-L30t-ssoe3i', 'scsi-0QEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222', 'scsi-SQEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797292 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wX9ua3-ujTP-p7s8-wxQz-my6v-aSdV-BlVN7a', 'scsi-0QEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c', 'scsi-SQEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b', 'scsi-SQEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3', 'dm-uuid-LVM-ZkL6ONrrTJ7thuRkFAXmCWJ98Giu8rzf6AyCY1QlpDnyMYhjrremnq2sgAaYdddg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974', 
'dm-uuid-LVM-XRxvjDzFqVbn17VReU4qIhLjXYCqKEKsQ1ZrgnhslVr38nUkWh0biaFxPwKrlCvY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797419 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797583 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.797594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9NBHH-zew4-pOfs-CtH8-hySc-o7NP-XT8fa2', 'scsi-0QEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d', 'scsi-SQEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9GzcCV-eEi2-9iq6-7OwL-k0t4-avIt-rnCcC9', 'scsi-0QEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da', 'scsi-SQEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47', 'scsi-SQEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2', 'dm-uuid-LVM-E17jWAJP6Me7aqZ4Q8UClyfqzp0zu2zwBObKfGSwewlrOjqJGlCTZm1c7oSX94jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797720 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel 
Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797732 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.797756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe', 'dm-uuid-LVM-PnLQWj1f4ROpOubC0dQiJ0Udk3o62eo2PjpyV1d2N6Q39nuZoymfRyTDp9Nioxh6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797787 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797843 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-27 01:03:01.797937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 
'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-InaLzj-RS9L-jkkb-KINo-oXRf-l7yT-9o9jkD', 'scsi-0QEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca', 'scsi-SQEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gZhBvh-1LFh-ekih-MIdg-M8Jo-TTgF-yb1n12', 'scsi-0QEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d', 'scsi-SQEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.797982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3', 'scsi-SQEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.798001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-27 01:03:01.798072 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.798088 | orchestrator | 2026-02-27 01:03:01.798100 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-27 01:03:01.798111 | orchestrator | Friday 27 February 2026 01:01:03 +0000 (0:00:00.643) 0:00:18.453 ******* 2026-02-27 01:03:01.798124 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a', 'dm-uuid-LVM-ktZNB2qrs3DaCnLkAdNHrqYVG23HKb1FGHO1W2U1zR2CbXChmoBj0ctfCoqUzjKf'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798138 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d', 'dm-uuid-LVM-qJU288vwWpkc3KXMmYUCJORUt3aDMziKdcrQEt5vLA8Hjbzwqjl8UH3NpNbOBh11'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798149 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798264 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798275 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3', 'dm-uuid-LVM-ZkL6ONrrTJ7thuRkFAXmCWJ98Giu8rzf6AyCY1QlpDnyMYhjrremnq2sgAaYdddg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798298 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798309 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974', 'dm-uuid-LVM-XRxvjDzFqVbn17VReU4qIhLjXYCqKEKsQ1ZrgnhslVr38nUkWh0biaFxPwKrlCvY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798338 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798351 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798362 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798373 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16', 'scsi-SQEMU_QEMU_HARDDISK_3470a12e-124f-400f-8df7-ef48fe544e4b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798419 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a-osd--block--c5e6c545--43c0--5a5e--9b6e--24e5d5043e2a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6xG180-8oDB-fzAy-pAEY-lUOZ-L30t-ssoe3i', 'scsi-0QEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222', 'scsi-SQEMU_QEMU_HARDDISK_c4916fb9-2e52-4262-9b09-55f9a233c222'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798431 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2', 'dm-uuid-LVM-E17jWAJP6Me7aqZ4Q8UClyfqzp0zu2zwBObKfGSwewlrOjqJGlCTZm1c7oSX94jh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798454 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--15e091ae--77f4--5dd5--92b2--2aa74778b61d-osd--block--15e091ae--77f4--5dd5--92b2--2aa74778b61d'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-wX9ua3-ujTP-p7s8-wxQz-my6v-aSdV-BlVN7a', 'scsi-0QEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c', 'scsi-SQEMU_QEMU_HARDDISK_31dfd5e5-18cf-471e-b1c7-8ca54ae9145c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798527 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe', 'dm-uuid-LVM-PnLQWj1f4ROpOubC0dQiJ0Udk3o62eo2PjpyV1d2N6Q39nuZoymfRyTDp9Nioxh6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b', 'scsi-SQEMU_QEMU_HARDDISK_7c486bab-939d-4b28-a8a9-5aea680a535b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798569 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798588 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798607 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798648 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.798672 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798684 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798696 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798719 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798730 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798748 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-27 01:03:01 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:03:01.798769 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798925 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15',
'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16', 'scsi-SQEMU_QEMU_HARDDISK_d07f98ad-3d62-49f5-84e9-af5adb521297-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.798998 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3-osd--block--aa250c28--8715--5ad9--8f6a--4b8a4568e8d3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-J9NBHH-zew4-pOfs-CtH8-hySc-o7NP-XT8fa2', 'scsi-0QEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d', 'scsi-SQEMU_QEMU_HARDDISK_a71caac6-92e2-45f9-9373-56e68f91355d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799023 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15', 
'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b66f543-9fce-4c0f-ad03-37f043f64686-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799040 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--91c1f24e--fd77--555b--b1fb--5152ae0ce974-osd--block--91c1f24e--fd77--555b--b1fb--5152ae0ce974'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-9GzcCV-eEi2-9iq6-7OwL-k0t4-avIt-rnCcC9', 'scsi-0QEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da', 'scsi-SQEMU_QEMU_HARDDISK_e3da6966-e430-4abd-922c-0deb6c0107da'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799070 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47', 'scsi-SQEMU_QEMU_HARDDISK_94dd7bd0-cf74-4f65-8a31-220357cecc47'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--5630d52f--55a8--52f3--8c7d--90d730eab2c2-osd--block--5630d52f--55a8--52f3--8c7d--90d730eab2c2'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-InaLzj-RS9L-jkkb-KINo-oXRf-l7yT-9o9jkD', 'scsi-0QEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca', 'scsi-SQEMU_QEMU_HARDDISK_7eee5dc0-08e1-454c-92c3-6b2c2994eeca'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799106 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.799118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--e90026b5--6780--5a31--9cea--c7916e7559fe-osd--block--e90026b5--6780--5a31--9cea--c7916e7559fe'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gZhBvh-1LFh-ekih-MIdg-M8Jo-TTgF-yb1n12', 'scsi-0QEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d', 'scsi-SQEMU_QEMU_HARDDISK_684e370a-eec5-4526-b882-46c5ae49497d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3', 'scsi-SQEMU_QEMU_HARDDISK_109976ce-0a0b-48dc-bf94-df447195f5f3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799160 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-27-00-03-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-27 01:03:01.799172 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.799183 | orchestrator | 2026-02-27 01:03:01.799195 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-27 01:03:01.799206 | orchestrator | Friday 27 February 2026 01:01:04 +0000 (0:00:00.649) 0:00:19.102 ******* 2026-02-27 01:03:01.799218 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.799229 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.799240 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.799251 | orchestrator | 2026-02-27 01:03:01.799262 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-27 01:03:01.799273 | orchestrator | Friday 27 February 2026 01:01:05 +0000 (0:00:00.726) 0:00:19.829 ******* 2026-02-27 01:03:01.799284 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.799295 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.799306 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.799317 | orchestrator | 2026-02-27 01:03:01.799328 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-27 01:03:01.799339 | orchestrator | Friday 27 February 2026 01:01:05 +0000 (0:00:00.588) 0:00:20.417 ******* 2026-02-27 01:03:01.799350 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.799361 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.799372 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.799382 | orchestrator | 2026-02-27 01:03:01.799394 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-27 01:03:01.799405 | orchestrator | Friday 27 February 2026 01:01:06 +0000 (0:00:00.727) 0:00:21.145 
******* 2026-02-27 01:03:01.799415 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.799427 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.799437 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.799448 | orchestrator | 2026-02-27 01:03:01.799459 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-27 01:03:01.799506 | orchestrator | Friday 27 February 2026 01:01:06 +0000 (0:00:00.352) 0:00:21.498 ******* 2026-02-27 01:03:01.799524 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.799542 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.799570 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.799588 | orchestrator | 2026-02-27 01:03:01.799605 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-27 01:03:01.799624 | orchestrator | Friday 27 February 2026 01:01:07 +0000 (0:00:00.539) 0:00:22.038 ******* 2026-02-27 01:03:01.799643 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.799662 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.799682 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.799694 | orchestrator | 2026-02-27 01:03:01.799705 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-27 01:03:01.799716 | orchestrator | Friday 27 February 2026 01:01:08 +0000 (0:00:00.572) 0:00:22.610 ******* 2026-02-27 01:03:01.799727 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-27 01:03:01.799737 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-27 01:03:01.799748 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-27 01:03:01.799759 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-27 01:03:01.799770 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-27 01:03:01.799781 | orchestrator | 
ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-27 01:03:01.799792 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-27 01:03:01.799802 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-27 01:03:01.799813 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-27 01:03:01.799826 | orchestrator | 2026-02-27 01:03:01.799845 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-27 01:03:01.799863 | orchestrator | Friday 27 February 2026 01:01:08 +0000 (0:00:00.850) 0:00:23.461 ******* 2026-02-27 01:03:01.799880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-27 01:03:01.799898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-27 01:03:01.799918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-27 01:03:01.799939 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.799951 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-27 01:03:01.799962 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-27 01:03:01.799973 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-27 01:03:01.799983 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.799994 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-27 01:03:01.800005 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-27 01:03:01.800016 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-27 01:03:01.800027 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.800038 | orchestrator | 2026-02-27 01:03:01.800049 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-27 01:03:01.800060 | orchestrator | Friday 27 February 2026 01:01:09 +0000 (0:00:00.385) 0:00:23.847 ******* 2026-02-27 
01:03:01.800071 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:03:01.800082 | orchestrator | 2026-02-27 01:03:01.800100 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-27 01:03:01.800112 | orchestrator | Friday 27 February 2026 01:01:10 +0000 (0:00:00.822) 0:00:24.669 ******* 2026-02-27 01:03:01.800133 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.800144 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.800155 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.800166 | orchestrator | 2026-02-27 01:03:01.800177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-27 01:03:01.800187 | orchestrator | Friday 27 February 2026 01:01:10 +0000 (0:00:00.341) 0:00:25.011 ******* 2026-02-27 01:03:01.800198 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.800209 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.800228 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.800239 | orchestrator | 2026-02-27 01:03:01.800250 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-27 01:03:01.800261 | orchestrator | Friday 27 February 2026 01:01:10 +0000 (0:00:00.360) 0:00:25.371 ******* 2026-02-27 01:03:01.800272 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.800282 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.800293 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:03:01.800304 | orchestrator | 2026-02-27 01:03:01.800314 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-27 01:03:01.800326 | orchestrator | Friday 27 February 2026 01:01:11 +0000 (0:00:00.354) 0:00:25.726 ******* 2026-02-27 
01:03:01.800337 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.800348 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.800359 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.800369 | orchestrator | 2026-02-27 01:03:01.800380 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-27 01:03:01.800391 | orchestrator | Friday 27 February 2026 01:01:12 +0000 (0:00:00.933) 0:00:26.659 ******* 2026-02-27 01:03:01.800402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-27 01:03:01.800414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-27 01:03:01.800424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-27 01:03:01.800435 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.800446 | orchestrator | 2026-02-27 01:03:01.800457 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-27 01:03:01.800501 | orchestrator | Friday 27 February 2026 01:01:12 +0000 (0:00:00.437) 0:00:27.096 ******* 2026-02-27 01:03:01.800522 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-27 01:03:01.800542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-27 01:03:01.800554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-27 01:03:01.800565 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.800576 | orchestrator | 2026-02-27 01:03:01.800586 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-27 01:03:01.800598 | orchestrator | Friday 27 February 2026 01:01:12 +0000 (0:00:00.392) 0:00:27.488 ******* 2026-02-27 01:03:01.800609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-27 01:03:01.800620 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-27 01:03:01.800630 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-27 01:03:01.800641 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.800652 | orchestrator | 2026-02-27 01:03:01.800668 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-27 01:03:01.800686 | orchestrator | Friday 27 February 2026 01:01:13 +0000 (0:00:00.499) 0:00:27.988 ******* 2026-02-27 01:03:01.800704 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:03:01.800723 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:03:01.800741 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:03:01.800760 | orchestrator | 2026-02-27 01:03:01.800778 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-27 01:03:01.800797 | orchestrator | Friday 27 February 2026 01:01:13 +0000 (0:00:00.358) 0:00:28.347 ******* 2026-02-27 01:03:01.800816 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-27 01:03:01.800834 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-27 01:03:01.800854 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-27 01:03:01.800873 | orchestrator | 2026-02-27 01:03:01.800885 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-27 01:03:01.800896 | orchestrator | Friday 27 February 2026 01:01:14 +0000 (0:00:00.567) 0:00:28.914 ******* 2026-02-27 01:03:01.800907 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-27 01:03:01.800918 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-27 01:03:01.800939 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-27 01:03:01.800950 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-27 01:03:01.800961 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-27 01:03:01.800972 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-27 01:03:01.800987 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-27 01:03:01.801006 | orchestrator | 2026-02-27 01:03:01.801024 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-27 01:03:01.801041 | orchestrator | Friday 27 February 2026 01:01:15 +0000 (0:00:01.205) 0:00:30.120 ******* 2026-02-27 01:03:01.801058 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-27 01:03:01.801075 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-27 01:03:01.801093 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-27 01:03:01.801111 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-27 01:03:01.801146 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-27 01:03:01.801164 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-27 01:03:01.801197 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-27 01:03:01.801218 | orchestrator | 2026-02-27 01:03:01.801230 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-27 01:03:01.801241 | orchestrator | Friday 27 February 2026 01:01:17 +0000 (0:00:02.152) 0:00:32.272 ******* 2026-02-27 01:03:01.801252 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:03:01.801263 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:03:01.801274 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-27 01:03:01.801284 | orchestrator | 2026-02-27 01:03:01.801296 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-27 01:03:01.801306 | orchestrator | Friday 27 February 2026 01:01:18 +0000 (0:00:00.404) 0:00:32.677 ******* 2026-02-27 01:03:01.801322 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-27 01:03:01.801343 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-27 01:03:01.801364 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-27 01:03:01.801377 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-27 01:03:01.801388 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-27 01:03:01.801399 | orchestrator | 2026-02-27 01:03:01.801419 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-27 01:03:01.801430 | orchestrator | Friday 27 February 2026 01:02:03 +0000 (0:00:45.760) 0:01:18.438 ******* 2026-02-27 01:03:01.801441 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801451 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801462 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801579 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801590 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801600 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801611 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-27 01:03:01.801620 | orchestrator | 2026-02-27 01:03:01.801630 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-27 01:03:01.801640 | orchestrator | Friday 27 February 2026 01:02:28 +0000 (0:00:24.893) 0:01:43.331 ******* 2026-02-27 01:03:01.801650 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801659 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801669 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801679 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801688 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801698 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801708 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-27 01:03:01.801718 | orchestrator | 2026-02-27 01:03:01.801727 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-27 01:03:01.801737 | orchestrator | Friday 27 February 2026 01:02:41 +0000 (0:00:13.165) 0:01:56.497 ******* 2026-02-27 01:03:01.801747 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801756 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-27 01:03:01.801766 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:03:01.801781 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801791 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-27 01:03:01.801808 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:03:01.801818 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801828 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-27 01:03:01.801837 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:03:01.801847 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801856 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-27 01:03:01.801866 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:03:01.801875 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801885 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-27 01:03:01.801895 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:03:01.801904 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-27 01:03:01.801921 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-27 01:03:01.801931 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-27 01:03:01.801940 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-27 01:03:01.801950 | orchestrator | 2026-02-27 01:03:01.801960 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:03:01.801969 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-27 01:03:01.801982 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-27 01:03:01.801992 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-27 01:03:01.802001 | orchestrator | 2026-02-27 01:03:01.802011 | orchestrator | 2026-02-27 01:03:01.802074 | orchestrator | 2026-02-27 01:03:01.802084 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:03:01.802094 | orchestrator | Friday 27 February 2026 01:03:00 +0000 (0:00:18.946) 0:02:15.443 ******* 2026-02-27 01:03:01.802104 | orchestrator | =============================================================================== 2026-02-27 01:03:01.802114 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.76s 2026-02-27 01:03:01.802123 | orchestrator | generate keys ---------------------------------------------------------- 24.89s 2026-02-27 01:03:01.802133 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.95s 
2026-02-27 01:03:01.802142 | orchestrator | get keys from monitors ------------------------------------------------- 13.17s 2026-02-27 01:03:01.802152 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.35s 2026-02-27 01:03:01.802161 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.15s 2026-02-27 01:03:01.802171 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.91s 2026-02-27 01:03:01.802180 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.21s 2026-02-27 01:03:01.802190 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.95s 2026-02-27 01:03:01.802200 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.93s 2026-02-27 01:03:01.802209 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s 2026-02-27 01:03:01.802219 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.85s 2026-02-27 01:03:01.802228 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.82s 2026-02-27 01:03:01.802238 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.73s 2026-02-27 01:03:01.802247 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s 2026-02-27 01:03:01.802257 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.73s 2026-02-27 01:03:01.802268 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.72s 2026-02-27 01:03:01.802285 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s 2026-02-27 01:03:01.802303 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.65s 2026-02-27 
01:03:01.802321 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.64s 2026-02-27 01:03:04.844569 | orchestrator | 2026-02-27 01:03:04 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:04.846115 | orchestrator | 2026-02-27 01:03:04 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED 2026-02-27 01:03:04.849820 | orchestrator | 2026-02-27 01:03:04 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:04.850174 | orchestrator | 2026-02-27 01:03:04 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:07.905704 | orchestrator | 2026-02-27 01:03:07 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:07.907147 | orchestrator | 2026-02-27 01:03:07 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED 2026-02-27 01:03:07.911896 | orchestrator | 2026-02-27 01:03:07 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:07.912242 | orchestrator | 2026-02-27 01:03:07 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:10.970713 | orchestrator | 2026-02-27 01:03:10 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:10.970811 | orchestrator | 2026-02-27 01:03:10 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED 2026-02-27 01:03:10.972407 | orchestrator | 2026-02-27 01:03:10 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:10.972444 | orchestrator | 2026-02-27 01:03:10 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:14.017568 | orchestrator | 2026-02-27 01:03:14 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:14.019327 | orchestrator | 2026-02-27 01:03:14 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED 2026-02-27 01:03:14.021064 | orchestrator | 2026-02-27 
01:03:14 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:14.021441 | orchestrator | 2026-02-27 01:03:14 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:17.064272 | orchestrator | 2026-02-27 01:03:17 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:17.065630 | orchestrator | 2026-02-27 01:03:17 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED 2026-02-27 01:03:17.066956 | orchestrator | 2026-02-27 01:03:17 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:17.067176 | orchestrator | 2026-02-27 01:03:17 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:20.115619 | orchestrator | 2026-02-27 01:03:20 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:20.116518 | orchestrator | 2026-02-27 01:03:20 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state STARTED 2026-02-27 01:03:20.118576 | orchestrator | 2026-02-27 01:03:20 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:20.118637 | orchestrator | 2026-02-27 01:03:20 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:23.164915 | orchestrator | 2026-02-27 01:03:23 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:23.170823 | orchestrator | 2026-02-27 01:03:23.171273 | orchestrator | 2026-02-27 01:03:23.171296 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:03:23.171341 | orchestrator | 2026-02-27 01:03:23.171357 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:03:23.171369 | orchestrator | Friday 27 February 2026 01:01:35 +0000 (0:00:00.277) 0:00:00.277 ******* 2026-02-27 01:03:23.171384 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:03:23.171399 | orchestrator | ok: [testbed-node-1] 
2026-02-27 01:03:23.171412 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:03:23.171458 | orchestrator | 2026-02-27 01:03:23.171471 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:03:23.171484 | orchestrator | Friday 27 February 2026 01:01:35 +0000 (0:00:00.312) 0:00:00.589 ******* 2026-02-27 01:03:23.171497 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2026-02-27 01:03:23.171534 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2026-02-27 01:03:23.171573 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2026-02-27 01:03:23.171587 | orchestrator | 2026-02-27 01:03:23.171601 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2026-02-27 01:03:23.171614 | orchestrator | 2026-02-27 01:03:23.171627 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-27 01:03:23.171641 | orchestrator | Friday 27 February 2026 01:01:35 +0000 (0:00:00.444) 0:00:01.034 ******* 2026-02-27 01:03:23.171654 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:03:23.171667 | orchestrator | 2026-02-27 01:03:23.171679 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-27 01:03:23.171692 | orchestrator | Friday 27 February 2026 01:01:36 +0000 (0:00:00.550) 0:00:01.585 ******* 2026-02-27 01:03:23.171723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 01:03:23.171757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 01:03:23.171789 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-27 01:03:23.171803 | orchestrator |
2026-02-27 01:03:23.171816 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-27 01:03:23.171868 | orchestrator | Friday 27 February 2026 01:01:37 +0000 (0:00:01.299) 0:00:02.884 *******
2026-02-27 01:03:23.171886 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.171898 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.171911 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.171924 | orchestrator |
2026-02-27 01:03:23.171967 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-27 01:03:23.171981 | orchestrator | Friday 27 February 2026 01:01:38 +0000 (0:00:00.493) 0:00:03.377 *******
2026-02-27 01:03:23.172002 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-27 01:03:23.172023 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-27 01:03:23.172035 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-27 01:03:23.172050 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-27 01:03:23.172063 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-27 01:03:23.172076 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-27 01:03:23.172087 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-27 01:03:23.172102 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-27 01:03:23.172115 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-27 01:03:23.172128 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-27 01:03:23.172138 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-27 01:03:23.172150 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-27 01:03:23.172164 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-27 01:03:23.172176 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-27 01:03:23.172189 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-27 01:03:23.172200 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-27 01:03:23.172212 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-27 01:03:23.172224 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-27 01:03:23.172238 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-27 01:03:23.172255 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-27 01:03:23.172268 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-27 01:03:23.172280 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-27 01:03:23.172293 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-27 01:03:23.172304 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-27 01:03:23.172316 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-27 01:03:23.172328 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-27 01:03:23.172340 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-27 01:03:23.172391 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-27 01:03:23.172404 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-27 01:03:23.172415 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-27 01:03:23.172426 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-27 01:03:23.172446 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-27 01:03:23.172458 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-27 01:03:23.172470 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-27 01:03:23.172535 | orchestrator |
2026-02-27 01:03:23.172548 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.172559 | orchestrator | Friday 27 February 2026 01:01:39 +0000 (0:00:00.802) 0:00:04.180 *******
2026-02-27 01:03:23.172570 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.172582 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.172593 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.172604 | orchestrator |
2026-02-27 01:03:23.172615 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.172626 | orchestrator | Friday 27 February 2026 01:01:39 +0000 (0:00:00.329) 0:00:04.510 *******
2026-02-27 01:03:23.172645 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.172657 | orchestrator |
2026-02-27 01:03:23.172667 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.172678 | orchestrator | Friday 27 February 2026 01:01:39 +0000 (0:00:00.136) 0:00:04.646 *******
2026-02-27 01:03:23.172690 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.172700 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.172710 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.172720 | orchestrator |
2026-02-27 01:03:23.172730 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.172742 | orchestrator | Friday 27 February 2026 01:01:40 +0000 (0:00:00.495) 0:00:05.142 *******
2026-02-27 01:03:23.172753 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.172762 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.172772 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.172782 | orchestrator |
2026-02-27 01:03:23.172793 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.172803 | orchestrator | Friday 27 February 2026 01:01:40 +0000 (0:00:00.323) 0:00:05.465 *******
2026-02-27 01:03:23.172812 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.172823 | orchestrator |
2026-02-27 01:03:23.172833 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.172844 | orchestrator | Friday 27 February 2026 01:01:40 +0000 (0:00:00.160) 0:00:05.626 *******
2026-02-27 01:03:23.172855 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.172865 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.172875 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.172886 | orchestrator |
2026-02-27 01:03:23.172895 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.172905 | orchestrator | Friday 27 February 2026 01:01:40 +0000 (0:00:00.315) 0:00:05.942 *******
2026-02-27 01:03:23.172915 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.172926 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.172936 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.172946 | orchestrator |
2026-02-27 01:03:23.172957 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.172966 | orchestrator | Friday 27 February 2026 01:01:41 +0000 (0:00:00.374) 0:00:06.316 *******
2026-02-27 01:03:23.172977 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.172988 | orchestrator |
2026-02-27 01:03:23.172999 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.173015 | orchestrator | Friday 27 February 2026 01:01:41 +0000 (0:00:00.360) 0:00:06.677 *******
2026-02-27 01:03:23.173026 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.173036 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.173055 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.173066 | orchestrator |
2026-02-27 01:03:23.173077 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.173088 | orchestrator | Friday 27 February 2026 01:01:41 +0000 (0:00:00.324) 0:00:07.002 *******
2026-02-27 01:03:23.173098 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.173107 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.173118 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.173128 | orchestrator |
2026-02-27 01:03:23.173139 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.173149 | orchestrator | Friday 27 February 2026 01:01:42 +0000 (0:00:00.336) 0:00:07.338 *******
2026-02-27 01:03:23.173159 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.173169 | orchestrator |
2026-02-27 01:03:23.173180 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.173193 | orchestrator | Friday 27 February 2026 01:01:42 +0000 (0:00:00.153) 0:00:07.491 *******
2026-02-27 01:03:23.173202 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.173213 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.173224 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.173235 | orchestrator |
2026-02-27 01:03:23.173245 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.173255 | orchestrator | Friday 27 February 2026 01:01:42 +0000 (0:00:00.297) 0:00:07.788 *******
2026-02-27 01:03:23.173265 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.173276 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.173286 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.173299 | orchestrator |
2026-02-27 01:03:23.173311 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.173322 | orchestrator | Friday 27 February 2026 01:01:43 +0000 (0:00:00.505) 0:00:08.294 *******
2026-02-27 01:03:23.173447 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.173463 | orchestrator |
2026-02-27 01:03:23.173474 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.173486 | orchestrator | Friday 27 February 2026 01:01:43 +0000 (0:00:00.170) 0:00:08.465 *******
2026-02-27 01:03:23.173497 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.173591 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.173605 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.173619 | orchestrator |
2026-02-27 01:03:23.173630 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.173642 | orchestrator | Friday 27 February 2026 01:01:43 +0000 (0:00:00.337) 0:00:08.803 *******
2026-02-27 01:03:23.173652 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.173664 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.173676 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.173687 | orchestrator |
2026-02-27 01:03:23.173699 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.173709 | orchestrator | Friday 27 February 2026 01:01:44 +0000 (0:00:00.378) 0:00:09.181 *******
2026-02-27 01:03:23.173720 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.173731 | orchestrator |
2026-02-27 01:03:23.173742 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.173753 | orchestrator | Friday 27 February 2026 01:01:44 +0000 (0:00:00.150) 0:00:09.332 *******
2026-02-27 01:03:23.173765 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.173775 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.173822 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.173835 | orchestrator |
2026-02-27 01:03:23.173846 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.173867 | orchestrator | Friday 27 February 2026 01:01:44 +0000 (0:00:00.312) 0:00:09.645 *******
2026-02-27 01:03:23.173878 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.173917 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.173931 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.173953 | orchestrator |
2026-02-27 01:03:23.173991 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.174005 | orchestrator | Friday 27 February 2026 01:01:45 +0000 (0:00:00.573) 0:00:10.219 *******
2026-02-27 01:03:23.174055 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174071 | orchestrator |
2026-02-27 01:03:23.174082 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.174093 | orchestrator | Friday 27 February 2026 01:01:45 +0000 (0:00:00.152) 0:00:10.371 *******
2026-02-27 01:03:23.174105 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174116 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.174128 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.174137 | orchestrator |
2026-02-27 01:03:23.174148 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.174159 | orchestrator | Friday 27 February 2026 01:01:45 +0000 (0:00:00.360) 0:00:10.731 *******
2026-02-27 01:03:23.174169 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.174180 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.174190 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.174199 | orchestrator |
2026-02-27 01:03:23.174209 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.174219 | orchestrator | Friday 27 February 2026 01:01:46 +0000 (0:00:00.348) 0:00:11.080 *******
2026-02-27 01:03:23.174230 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174241 | orchestrator |
2026-02-27 01:03:23.174251 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.174263 | orchestrator | Friday 27 February 2026 01:01:46 +0000 (0:00:00.140) 0:00:11.221 *******
2026-02-27 01:03:23.174274 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174284 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.174294 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.174306 | orchestrator |
2026-02-27 01:03:23.174317 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.174328 | orchestrator | Friday 27 February 2026 01:01:46 +0000 (0:00:00.497) 0:00:11.718 *******
2026-02-27 01:03:23.174338 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.174349 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.174359 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.174369 | orchestrator |
2026-02-27 01:03:23.174388 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.174401 | orchestrator | Friday 27 February 2026 01:01:46 +0000 (0:00:00.340) 0:00:12.059 *******
2026-02-27 01:03:23.174411 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174422 | orchestrator |
2026-02-27 01:03:23.174433 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.174445 | orchestrator | Friday 27 February 2026 01:01:47 +0000 (0:00:00.193) 0:00:12.253 *******
2026-02-27 01:03:23.174456 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174468 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.174478 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.174489 | orchestrator |
2026-02-27 01:03:23.174557 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-27 01:03:23.174576 | orchestrator | Friday 27 February 2026 01:01:47 +0000 (0:00:00.342) 0:00:12.595 *******
2026-02-27 01:03:23.174587 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:03:23.174597 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:03:23.174608 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:03:23.174618 | orchestrator |
2026-02-27 01:03:23.174628 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-27 01:03:23.174639 | orchestrator | Friday 27 February 2026 01:01:47 +0000 (0:00:00.363) 0:00:12.959 *******
2026-02-27 01:03:23.174651 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174661 | orchestrator |
2026-02-27 01:03:23.174672 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-27 01:03:23.174694 | orchestrator | Friday 27 February 2026 01:01:48 +0000 (0:00:00.154) 0:00:13.113 *******
2026-02-27 01:03:23.174706 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.174716 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.174727 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.174739 | orchestrator |
2026-02-27 01:03:23.174750 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-27 01:03:23.174760 | orchestrator | Friday 27 February 2026 01:01:48 +0000 (0:00:00.569) 0:00:13.683 *******
2026-02-27 01:03:23.174771 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:03:23.174782 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:03:23.174794 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:03:23.174804 | orchestrator |
2026-02-27 01:03:23.174815 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-27 01:03:23.174858 | orchestrator | Friday 27 February 2026 01:01:50 +0000 (0:00:01.759) 0:00:15.442 *******
2026-02-27 01:03:23.174869 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-27 01:03:23.174880 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-27 01:03:23.174890 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-27 01:03:23.174902 | orchestrator |
2026-02-27 01:03:23.174912 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-27 01:03:23.174923 | orchestrator | Friday 27 February 2026 01:01:52 +0000 (0:00:02.065) 0:00:17.507 *******
2026-02-27 01:03:23.174935 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-27 01:03:23.174974 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-27 01:03:23.174987 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-27 01:03:23.174999 | orchestrator |
2026-02-27 01:03:23.175021 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2026-02-27 01:03:23.175032 | orchestrator | Friday 27 February 2026 01:01:55 +0000 (0:00:02.617) 0:00:20.125 *******
2026-02-27 01:03:23.175044 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-27 01:03:23.175055 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-27 01:03:23.175066 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2026-02-27 01:03:23.175076 | orchestrator |
2026-02-27 01:03:23.175087 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2026-02-27 01:03:23.175099 | orchestrator | Friday 27 February 2026 01:01:57 +0000 (0:00:02.198) 0:00:22.323 *******
2026-02-27 01:03:23.175111 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.175121 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.175132 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.175144 | orchestrator |
2026-02-27 01:03:23.175156 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2026-02-27 01:03:23.175165 | orchestrator | Friday 27 February 2026 01:01:57 +0000 (0:00:00.323) 0:00:22.647 *******
2026-02-27 01:03:23.175176 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:03:23.175187 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:03:23.175199 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:03:23.175211 | orchestrator |
2026-02-27 01:03:23.175222 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-27 01:03:23.175232 | orchestrator | Friday 27 February 2026 01:01:57 +0000 (0:00:00.319) 0:00:22.967 *******
2026-02-27 01:03:23.175243 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:03:23.175254 | orchestrator |
2026-02-27 01:03:23.175265 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-02-27 01:03:23.175290 | orchestrator | Friday 27 February 2026 01:01:58 +0000 (0:00:00.926) 0:00:23.893 *******
2026-02-27 01:03:23.175310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 01:03:23.175340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 01:03:23.175361 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-27 01:03:23.175373 | orchestrator |
2026-02-27 01:03:23.175385 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-02-27 01:03:23.175395 | orchestrator | Friday 27 February 2026 01:02:00 +0000 (0:00:01.780) 0:00:25.673 *******
2026-02-27 01:03:23.175423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 01:03:23.175442 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:03:23.175461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 01:03:23.175476 | orchestrator | 2026-02-27 01:03:23 | INFO  | Task 6595c2dc-cd7e-4585-ba30-13712dc7b670 is in state SUCCESS 2026-02-27 01:03:23.175489 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:03:23.175552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 01:03:23.175620 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:03:23.175634 | orchestrator | 2026-02-27 01:03:23.175646 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-27 01:03:23.175657 | orchestrator | Friday 27 February 2026 01:02:01 +0000 (0:00:00.745) 0:00:26.419 ******* 2026-02-27 01:03:23.175679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 01:03:23.175702 | orchestrator | skipping: [testbed-node-1] 
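Every horizon HAProxy frontend in the item dicts above carries the rule `use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }`, which diverts ACME HTTP-01 challenge requests away from Horizon. A minimal sketch of that path match, using Python's `re` to approximate HAProxy's `path_reg` (an unanchored regex match on the request path); `routes_to_acme_backend` is an illustrative name, not part of the deployment:

```python
import re

# The ACL regex copied from the frontend rules in the log above.
# Note the log's regex leaves the dot after "/" unescaped; kept as-is.
ACME_RE = re.compile(r"^/.well-known/acme-challenge/.+")

def routes_to_acme_backend(path: str) -> bool:
    """True if HAProxy would hand this request path to acme_client_back."""
    return ACME_RE.search(path) is not None

# Challenge requests are diverted; normal Horizon traffic is not.
print(routes_to_acme_backend("/.well-known/acme-challenge/token123"))  # True
print(routes_to_acme_backend("/auth/login/"))                          # False
```

Because the rule is attached to the external and internal frontends as well as both redirect frontends, challenge requests reach the ACME client backend on port 80 before any HTTPS redirect applies.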
2026-02-27 01:03:23.175721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 01:03:23.175734 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:03:23.175755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-27 01:03:23.175775 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:03:23.175786 | orchestrator | 2026-02-27 01:03:23.175796 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-27 01:03:23.175807 | orchestrator | Friday 27 February 2026 01:02:02 +0000 (0:00:00.942) 0:00:27.361 ******* 2026-02-27 01:03:23.175822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 01:03:23.175885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 01:03:23.175912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-27 01:03:23.175925 | orchestrator | 2026-02-27 01:03:23.175935 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-27 01:03:23.175946 | orchestrator | Friday 27 February 2026 01:02:04 +0000 (0:00:01.904) 0:00:29.266 ******* 2026-02-27 01:03:23.175958 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:03:23.175970 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:03:23.175980 | orchestrator | 
skipping: [testbed-node-2] 2026-02-27 01:03:23.176024 | orchestrator | 2026-02-27 01:03:23.176042 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-27 01:03:23.176054 | orchestrator | Friday 27 February 2026 01:02:04 +0000 (0:00:00.313) 0:00:29.579 ******* 2026-02-27 01:03:23.176065 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:03:23.176083 | orchestrator | 2026-02-27 01:03:23.176094 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-27 01:03:23.176106 | orchestrator | Friday 27 February 2026 01:02:05 +0000 (0:00:00.605) 0:00:30.185 ******* 2026-02-27 01:03:23.176116 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:03:23.176127 | orchestrator | 2026-02-27 01:03:23.176137 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-27 01:03:23.176148 | orchestrator | Friday 27 February 2026 01:02:07 +0000 (0:00:02.668) 0:00:32.853 ******* 2026-02-27 01:03:23.176160 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:03:23.176201 | orchestrator | 2026-02-27 01:03:23.176213 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-27 01:03:23.176223 | orchestrator | Friday 27 February 2026 01:02:10 +0000 (0:00:02.861) 0:00:35.715 ******* 2026-02-27 01:03:23.176236 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:03:23.176247 | orchestrator | 2026-02-27 01:03:23.176258 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-27 01:03:23.176268 | orchestrator | Friday 27 February 2026 01:02:27 +0000 (0:00:17.124) 0:00:52.840 ******* 2026-02-27 01:03:23.176278 | orchestrator | 2026-02-27 01:03:23.176289 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-27 
01:03:23.176300 | orchestrator | Friday 27 February 2026 01:02:27 +0000 (0:00:00.072) 0:00:52.912 ******* 2026-02-27 01:03:23.176311 | orchestrator | 2026-02-27 01:03:23.176348 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-27 01:03:23.176360 | orchestrator | Friday 27 February 2026 01:02:27 +0000 (0:00:00.069) 0:00:52.982 ******* 2026-02-27 01:03:23.176371 | orchestrator | 2026-02-27 01:03:23.176383 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-27 01:03:23.176394 | orchestrator | Friday 27 February 2026 01:02:27 +0000 (0:00:00.073) 0:00:53.055 ******* 2026-02-27 01:03:23.176404 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:03:23.176415 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:03:23.176425 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:03:23.176435 | orchestrator | 2026-02-27 01:03:23.176455 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:03:23.176466 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-27 01:03:23.176478 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-27 01:03:23.176489 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-27 01:03:23.176501 | orchestrator | 2026-02-27 01:03:23.176530 | orchestrator | 2026-02-27 01:03:23.176541 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:03:23.176551 | orchestrator | Friday 27 February 2026 01:03:20 +0000 (0:00:52.362) 0:01:45.418 ******* 2026-02-27 01:03:23.176561 | orchestrator | =============================================================================== 2026-02-27 01:03:23.176571 | orchestrator | horizon : Restart horizon container 
------------------------------------ 52.36s 2026-02-27 01:03:23.176582 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.12s 2026-02-27 01:03:23.176627 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.86s 2026-02-27 01:03:23.176641 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.67s 2026-02-27 01:03:23.176652 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.62s 2026-02-27 01:03:23.176663 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.20s 2026-02-27 01:03:23.176675 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.07s 2026-02-27 01:03:23.176686 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.90s 2026-02-27 01:03:23.176707 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.78s 2026-02-27 01:03:23.176719 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.76s 2026-02-27 01:03:23.176730 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.30s 2026-02-27 01:03:23.176741 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.94s 2026-02-27 01:03:23.176752 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.93s 2026-02-27 01:03:23.176763 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2026-02-27 01:03:23.176773 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.75s 2026-02-27 01:03:23.176784 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2026-02-27 01:03:23.176795 | orchestrator | horizon : Update policy file name 
--------------------------------------- 0.57s 2026-02-27 01:03:23.176807 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.57s 2026-02-27 01:03:23.176818 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-02-27 01:03:23.176830 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s 2026-02-27 01:03:23.176850 | orchestrator | 2026-02-27 01:03:23 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:23.176862 | orchestrator | 2026-02-27 01:03:23 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:26.233019 | orchestrator | 2026-02-27 01:03:26 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:26.234357 | orchestrator | 2026-02-27 01:03:26 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:26.235047 | orchestrator | 2026-02-27 01:03:26 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:29.286508 | orchestrator | 2026-02-27 01:03:29 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:29.289709 | orchestrator | 2026-02-27 01:03:29 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:29.289771 | orchestrator | 2026-02-27 01:03:29 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:32.334423 | orchestrator | 2026-02-27 01:03:32 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:32.336959 | orchestrator | 2026-02-27 01:03:32 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:32.337327 | orchestrator | 2026-02-27 01:03:32 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:35.378466 | orchestrator | 2026-02-27 01:03:35 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:35.379350 | orchestrator | 
2026-02-27 01:03:35 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:35.379388 | orchestrator | 2026-02-27 01:03:35 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:38.429507 | orchestrator | 2026-02-27 01:03:38 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state STARTED 2026-02-27 01:03:38.430736 | orchestrator | 2026-02-27 01:03:38 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:38.430784 | orchestrator | 2026-02-27 01:03:38 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:41.479100 | orchestrator | 2026-02-27 01:03:41 | INFO  | Task 9c131f7a-a4ec-48f0-9dfa-b84b3325e63d is in state SUCCESS 2026-02-27 01:03:41.481435 | orchestrator | 2026-02-27 01:03:41 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:03:41.482574 | orchestrator | 2026-02-27 01:03:41 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:41.482596 | orchestrator | 2026-02-27 01:03:41 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:44.538476 | orchestrator | 2026-02-27 01:03:44 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:03:44.541002 | orchestrator | 2026-02-27 01:03:44 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:44.541088 | orchestrator | 2026-02-27 01:03:44 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:47.597083 | orchestrator | 2026-02-27 01:03:47 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:03:47.600440 | orchestrator | 2026-02-27 01:03:47 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:47.600506 | orchestrator | 2026-02-27 01:03:47 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:50.655109 | orchestrator | 2026-02-27 01:03:50 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in 
state STARTED 2026-02-27 01:03:50.658135 | orchestrator | 2026-02-27 01:03:50 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:50.658185 | orchestrator | 2026-02-27 01:03:50 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:53.708329 | orchestrator | 2026-02-27 01:03:53 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:03:53.712523 | orchestrator | 2026-02-27 01:03:53 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:53.712626 | orchestrator | 2026-02-27 01:03:53 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:56.760093 | orchestrator | 2026-02-27 01:03:56 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:03:56.762012 | orchestrator | 2026-02-27 01:03:56 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:56.762127 | orchestrator | 2026-02-27 01:03:56 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:03:59.811026 | orchestrator | 2026-02-27 01:03:59 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:03:59.812490 | orchestrator | 2026-02-27 01:03:59 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:03:59.812615 | orchestrator | 2026-02-27 01:03:59 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:04:02.861774 | orchestrator | 2026-02-27 01:04:02 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:04:02.864635 | orchestrator | 2026-02-27 01:04:02 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:04:02.865149 | orchestrator | 2026-02-27 01:04:02 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:04:05.917958 | orchestrator | 2026-02-27 01:04:05 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:04:05.922535 | orchestrator | 2026-02-27 01:04:05 | 
INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:04:05.922686 | orchestrator | 2026-02-27 01:04:05 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:04:08.969834 | orchestrator | 2026-02-27 01:04:08 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:04:08.971648 | orchestrator | 2026-02-27 01:04:08 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:04:08.971831 | orchestrator | 2026-02-27 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:04:12.017013 | orchestrator | 2026-02-27 01:04:12 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:04:12.018840 | orchestrator | 2026-02-27 01:04:12 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:04:12.018876 | orchestrator | 2026-02-27 01:04:12 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:04:15.092863 | orchestrator | 2026-02-27 01:04:15 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:04:15.095060 | orchestrator | 2026-02-27 01:04:15 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:04:15.095126 | orchestrator | 2026-02-27 01:04:15 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:04:18.134464 | orchestrator | 2026-02-27 01:04:18 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:04:18.136316 | orchestrator | 2026-02-27 01:04:18 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 2026-02-27 01:04:18.136365 | orchestrator | 2026-02-27 01:04:18 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:04:21.181581 | orchestrator | 2026-02-27 01:04:21 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED 2026-02-27 01:04:21.184710 | orchestrator | 2026-02-27 01:04:21 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED 
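The interleaved `INFO | Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines come from the OSISM client polling task state until it turns terminal. A minimal sketch of such a loop, assuming a caller-supplied `get_state` callable (a hypothetical stand-in for the real task-state lookup, which is not shown in this log):

```python
import time

def wait_for_task(get_state, interval=1.0, terminal=("SUCCESS", "FAILURE")):
    """Poll get_state() until it reports a terminal state; return that state.

    get_state is a hypothetical callable standing in for the real task-state
    lookup; the log shows one check landing roughly every three seconds.
    """
    while True:
        state = get_state()
        print(f"Task is in state {state}")
        if state in terminal:
            return state
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)

# Usage with a stubbed state sequence (interval=0 to skip sleeping):
states = iter(["STARTED", "STARTED", "SUCCESS"])
print(wait_for_task(lambda: next(states), interval=0))  # SUCCESS
```

With two tasks in flight, as above, the client runs one such loop per task ID, which is why pairs of `STARTED` lines alternate in the log until each task reports `SUCCESS`.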
2026-02-27 01:04:21.184761 | orchestrator | 2026-02-27 01:04:21 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:04:24.219976 | orchestrator | 2026-02-27 01:04:24 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED
2026-02-27 01:04:24.221790 | orchestrator | 2026-02-27 01:04:24 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED
2026-02-27 01:04:24.221822 | orchestrator | 2026-02-27 01:04:24 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:04:27.274457 | orchestrator | 2026-02-27 01:04:27 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED
2026-02-27 01:04:27.275844 | orchestrator | 2026-02-27 01:04:27 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state STARTED
2026-02-27 01:04:27.275883 | orchestrator | 2026-02-27 01:04:27 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:04:30.316155 | orchestrator | 2026-02-27 01:04:30 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED
2026-02-27 01:04:30.317432 | orchestrator | 2026-02-27 01:04:30 | INFO  | Task 5abcc8e2-71b8-49e0-b49a-9c87d1b2f527 is in state SUCCESS
2026-02-27 01:04:30.319445 | orchestrator |
2026-02-27 01:04:30.319495 | orchestrator |
2026-02-27 01:04:30.319508 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-02-27 01:04:30.319520 | orchestrator |
2026-02-27 01:04:30.319531 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-02-27 01:04:30.319542 | orchestrator | Friday 27 February 2026 01:03:06 +0000 (0:00:00.162) 0:00:00.162 *******
2026-02-27 01:04:30.319553 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-27 01:04:30.319565 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.319576 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.319587 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-27 01:04:30.319597 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.319608 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-27 01:04:30.319698 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-27 01:04:30.319843 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-27 01:04:30.319859 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-27 01:04:30.319871 | orchestrator |
2026-02-27 01:04:30.319882 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-02-27 01:04:30.319893 | orchestrator | Friday 27 February 2026 01:03:11 +0000 (0:00:04.944) 0:00:05.106 *******
2026-02-27 01:04:30.319904 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-02-27 01:04:30.319915 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.319925 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320236 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-02-27 01:04:30.320270 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320290 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-02-27 01:04:30.320301 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-02-27 01:04:30.320312 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-02-27 01:04:30.320323 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-02-27 01:04:30.320336 | orchestrator |
2026-02-27 01:04:30.320365 | orchestrator | TASK [Create share directory] **************************************************
2026-02-27 01:04:30.320377 | orchestrator | Friday 27 February 2026 01:03:15 +0000 (0:00:04.272) 0:00:09.378 *******
2026-02-27 01:04:30.320388 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-27 01:04:30.320399 | orchestrator |
2026-02-27 01:04:30.320410 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-27 01:04:30.320421 | orchestrator | Friday 27 February 2026 01:03:16 +0000 (0:00:00.898) 0:00:10.276 *******
2026-02-27 01:04:30.320440 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-27 01:04:30.320458 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320476 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320494 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-27 01:04:30.320513 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320532 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-27 01:04:30.320550 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-27 01:04:30.320569 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-27 01:04:30.320587 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-27 01:04:30.320600 | orchestrator |
2026-02-27 01:04:30.320611 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-27 01:04:30.320649 | orchestrator | Friday 27 February 2026 01:03:28 +0000 (0:00:12.667) 0:00:22.944 *******
2026-02-27 01:04:30.320661 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-27 01:04:30.320671 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-27 01:04:30.320682 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-27 01:04:30.320693 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-27 01:04:30.320731 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-27 01:04:30.320743 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-27 01:04:30.320754 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-27 01:04:30.320765 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-27 01:04:30.320776 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-27 01:04:30.320786 | orchestrator |
2026-02-27 01:04:30.320797 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-27 01:04:30.320808 | orchestrator | Friday 27 February 2026 01:03:32 +0000 (0:00:03.891) 0:00:26.835 *******
2026-02-27 01:04:30.320819 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-27 01:04:30.320830 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320841 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320853 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-27 01:04:30.320866 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-27 01:04:30.320878 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-27 01:04:30.320891 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-27 01:04:30.320904 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-27 01:04:30.320916 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-27 01:04:30.320929 | orchestrator |
2026-02-27 01:04:30.321568 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 01:04:30.321594 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 01:04:30.321606 | orchestrator |
2026-02-27 01:04:30.321639 | orchestrator |
2026-02-27 01:04:30.321652 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 01:04:30.321663 | orchestrator | Friday 27 February 2026 01:03:39 +0000 (0:00:06.839) 0:00:33.674 *******
2026-02-27 01:04:30.321674 | orchestrator | ===============================================================================
2026-02-27 01:04:30.321685 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.67s
2026-02-27 01:04:30.321696 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.84s
2026-02-27 01:04:30.321707 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.94s
2026-02-27 01:04:30.321717 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.27s
2026-02-27 01:04:30.321728 | orchestrator | Check if target directories exist --------------------------------------- 3.89s
2026-02-27 01:04:30.321739 | orchestrator | Create share directory -------------------------------------------------- 0.90s
2026-02-27 01:04:30.321749 | orchestrator |
2026-02-27 01:04:30.321760 | orchestrator |
2026-02-27 01:04:30.321780 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 01:04:30.321791 | orchestrator |
2026-02-27 01:04:30.321802 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 01:04:30.321812 | orchestrator | Friday 27 February 2026 01:01:35 +0000 (0:00:00.267) 0:00:00.267 *******
2026-02-27 01:04:30.321823 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:04:30.321834 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:04:30.321845 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:04:30.321856 | orchestrator |
2026-02-27 01:04:30.321866 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 01:04:30.321877 | orchestrator | Friday 27 February 2026 01:01:35 +0000 (0:00:00.293) 0:00:00.561 *******
2026-02-27 01:04:30.321899 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-27 01:04:30.321910 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-27 01:04:30.321920 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-27 01:04:30.321931 | orchestrator |
2026-02-27 01:04:30.321942 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-27 01:04:30.321952 | orchestrator |
2026-02-27 01:04:30.321964 | orchestrator | TASK 
[keystone : include_tasks] ************************************************
2026-02-27 01:04:30.321974 | orchestrator | Friday 27 February 2026 01:01:35 +0000 (0:00:00.455) 0:00:01.016 *******
2026-02-27 01:04:30.321985 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:04:30.321996 | orchestrator |
2026-02-27 01:04:30.322007 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-27 01:04:30.322062 | orchestrator | Friday 27 February 2026 01:01:36 +0000 (0:00:00.611) 0:00:01.627 *******
2026-02-27 01:04:30.322129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.322155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.322185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.322219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.322240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.322315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.322339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.322363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.322386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.322416 | orchestrator |
2026-02-27 01:04:30.322431 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-27 01:04:30.322456 | orchestrator | Friday 27 February 2026 01:01:38 +0000 (0:00:02.000) 0:00:03.628 *******
2026-02-27 01:04:30.322480 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:04:30.322505 | orchestrator |
2026-02-27 01:04:30.322524 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-27 01:04:30.322543 | orchestrator | Friday 27 February 2026 01:01:38 +0000 (0:00:00.148) 0:00:03.777 *******
2026-02-27 01:04:30.322561 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:04:30.322579 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:04:30.322597 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:04:30.322769 | orchestrator |
2026-02-27 01:04:30.322807 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-27 01:04:30.322817 | orchestrator | Friday 27 February 2026 01:01:39 +0000 (0:00:00.462) 0:00:04.240 *******
2026-02-27 01:04:30.322827 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-27 01:04:30.322836 | orchestrator |
2026-02-27 01:04:30.322846 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-27 01:04:30.322855 | orchestrator | Friday 27 February 2026 01:01:40 +0000 (0:00:00.856) 0:00:05.096 *******
2026-02-27 01:04:30.322865 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:04:30.322875 | orchestrator |
2026-02-27 01:04:30.322884 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-27 01:04:30.322894 | orchestrator | Friday 27 February 2026 01:01:40 +0000 (0:00:00.570) 0:00:05.667 *******
2026-02-27 01:04:30.322916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.322929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.322941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.322969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.322980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.322999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.323009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.323019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.323037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.323047 | orchestrator |
2026-02-27 01:04:30.323057 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-27 01:04:30.323066 | orchestrator | Friday 27 February 2026 01:01:44 +0000 (0:00:03.535) 0:00:09.203 *******
2026-02-27 01:04:30.323081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.323092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.323109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.323120 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:04:30.323130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.323146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.323173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.323192 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:04:30.323210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-27 01:04:30.323236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-27 01:04:30.323256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-27 01:04:30.323282 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:04:30.323297 | orchestrator |
2026-02-27 01:04:30.323312 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2026-02-27 01:04:30.323334 | orchestrator | Friday 27 February 2026 01:01:44 +0000 (0:00:00.580) 0:00:09.784 *******
2026-02-27 01:04:30.323358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 01:04:30.323383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.323400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 01:04:30.323416 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.323443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 01:04:30.323462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.323490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 01:04:30.323508 | 
orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.323530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 01:04:30.323541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.323551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 01:04:30.323561 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.323571 | orchestrator | 2026-02-27 01:04:30.323580 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-27 01:04:30.323597 | orchestrator | Friday 27 February 2026 01:01:45 +0000 (0:00:00.813) 0:00:10.598 ******* 2026-02-27 01:04:30.323608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.323663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.323680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.323691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-27 01:04:30.323709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-27 01:04:30.323725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-27 01:04:30.323735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.323745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.323759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 
01:04:30.323769 | orchestrator | 2026-02-27 01:04:30.323779 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-27 01:04:30.323789 | orchestrator | Friday 27 February 2026 01:01:49 +0000 (0:00:03.507) 0:00:14.105 ******* 2026-02-27 01:04:30.323800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.323822 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-27 01:04:30.323833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.323844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.323859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.323869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.323885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.323901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.323911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.323921 | orchestrator | 2026-02-27 01:04:30.323931 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-27 01:04:30.323941 | orchestrator | Friday 27 February 2026 01:01:54 +0000 (0:00:05.720) 0:00:19.826 ******* 2026-02-27 01:04:30.323951 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.323961 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:04:30.323970 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:04:30.323980 | orchestrator | 
2026-02-27 01:04:30.323989 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-02-27 01:04:30.323999 | orchestrator | Friday 27 February 2026 01:01:56 +0000 (0:00:01.467) 0:00:21.293 ******* 2026-02-27 01:04:30.324020 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.324030 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.324039 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.324057 | orchestrator | 2026-02-27 01:04:30.324067 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-02-27 01:04:30.324081 | orchestrator | Friday 27 February 2026 01:01:56 +0000 (0:00:00.596) 0:00:21.890 ******* 2026-02-27 01:04:30.324091 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.324100 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.324110 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.324119 | orchestrator | 2026-02-27 01:04:30.324129 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-02-27 01:04:30.324138 | orchestrator | Friday 27 February 2026 01:01:57 +0000 (0:00:00.343) 0:00:22.233 ******* 2026-02-27 01:04:30.324148 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.324158 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.324167 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.324177 | orchestrator | 2026-02-27 01:04:30.324186 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-02-27 01:04:30.324196 | orchestrator | Friday 27 February 2026 01:01:57 +0000 (0:00:00.531) 0:00:22.764 ******* 2026-02-27 01:04:30.324206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 01:04:30.324229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.324240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 01:04:30.324250 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.324260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 01:04:30.324281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.324291 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 01:04:30.324310 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.324328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-27 01:04:30.324339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-27 01:04:30.324349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-27 01:04:30.324359 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.324369 | orchestrator | 2026-02-27 01:04:30.324378 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-27 01:04:30.324388 | orchestrator | Friday 27 February 2026 01:01:58 +0000 (0:00:00.605) 0:00:23.370 ******* 2026-02-27 01:04:30.324398 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.324407 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.324417 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.324427 | orchestrator | 2026-02-27 01:04:30.324436 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-27 01:04:30.324446 | orchestrator | Friday 27 February 2026 01:01:58 +0000 (0:00:00.330) 0:00:23.700 ******* 2026-02-27 01:04:30.324456 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-27 01:04:30.324469 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-27 01:04:30.324485 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-02-27 01:04:30.324494 | orchestrator | 2026-02-27 01:04:30.324504 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-02-27 01:04:30.324514 | orchestrator | Friday 27 February 2026 01:02:00 +0000 (0:00:01.847) 0:00:25.547 ******* 2026-02-27 01:04:30.324523 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:04:30.324533 | orchestrator | 2026-02-27 01:04:30.324543 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-02-27 01:04:30.324552 | orchestrator | Friday 27 February 2026 01:02:01 +0000 (0:00:01.021) 0:00:26.569 ******* 2026-02-27 01:04:30.324562 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.324571 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.324581 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.324590 | orchestrator | 2026-02-27 01:04:30.324600 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-02-27 01:04:30.324609 | orchestrator | Friday 27 February 2026 01:02:02 +0000 (0:00:00.897) 0:00:27.466 ******* 2026-02-27 01:04:30.324637 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:04:30.324648 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-27 01:04:30.324657 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-27 01:04:30.324667 | orchestrator | 2026-02-27 01:04:30.324676 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-02-27 01:04:30.324686 | orchestrator | Friday 27 February 2026 01:02:03 +0000 (0:00:01.462) 
0:00:28.928 ******* 2026-02-27 01:04:30.324696 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:04:30.324706 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:04:30.324715 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:04:30.324725 | orchestrator | 2026-02-27 01:04:30.324734 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-02-27 01:04:30.324744 | orchestrator | Friday 27 February 2026 01:02:04 +0000 (0:00:00.317) 0:00:29.246 ******* 2026-02-27 01:04:30.324754 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-27 01:04:30.324763 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-27 01:04:30.324773 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-02-27 01:04:30.324782 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-27 01:04:30.324803 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-27 01:04:30.324813 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-02-27 01:04:30.324823 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-27 01:04:30.324833 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-27 01:04:30.324842 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-02-27 01:04:30.324852 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-27 01:04:30.324861 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-27 
01:04:30.324871 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-02-27 01:04:30.324880 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-27 01:04:30.324890 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-27 01:04:30.324899 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-02-27 01:04:30.324915 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-27 01:04:30.324925 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-27 01:04:30.324935 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-27 01:04:30.324944 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-27 01:04:30.324954 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-27 01:04:30.324963 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-27 01:04:30.324973 | orchestrator | 2026-02-27 01:04:30.324983 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-02-27 01:04:30.324992 | orchestrator | Friday 27 February 2026 01:02:13 +0000 (0:00:09.402) 0:00:38.649 ******* 2026-02-27 01:04:30.325002 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-27 01:04:30.325011 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-27 01:04:30.325021 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-27 01:04:30.325030 
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-27 01:04:30.325040 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-27 01:04:30.325053 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-27 01:04:30.325064 | orchestrator | 2026-02-27 01:04:30.325073 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-27 01:04:30.325083 | orchestrator | Friday 27 February 2026 01:02:16 +0000 (0:00:03.081) 0:00:41.730 ******* 2026-02-27 01:04:30.325093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.325110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.325122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-27 01:04:30.325137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-27 01:04:30.325151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-27 01:04:30.325162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-27 01:04:30.325172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.325188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.325203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-27 01:04:30.325213 | orchestrator | 2026-02-27 01:04:30.325223 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
2026-02-27 01:04:30.325232 | orchestrator | Friday 27 February 2026 01:02:19 +0000 (0:00:02.716) 0:00:44.447 ******* 2026-02-27 01:04:30.325242 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.325252 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.325261 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.325271 | orchestrator | 2026-02-27 01:04:30.325281 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-02-27 01:04:30.325291 | orchestrator | Friday 27 February 2026 01:02:19 +0000 (0:00:00.322) 0:00:44.770 ******* 2026-02-27 01:04:30.325300 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.325310 | orchestrator | 2026-02-27 01:04:30.325319 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-02-27 01:04:30.325329 | orchestrator | Friday 27 February 2026 01:02:22 +0000 (0:00:02.548) 0:00:47.318 ******* 2026-02-27 01:04:30.325338 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.325348 | orchestrator | 2026-02-27 01:04:30.325357 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-02-27 01:04:30.325367 | orchestrator | Friday 27 February 2026 01:02:24 +0000 (0:00:02.454) 0:00:49.773 ******* 2026-02-27 01:04:30.325376 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:04:30.325386 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:04:30.325395 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:04:30.325405 | orchestrator | 2026-02-27 01:04:30.325415 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-02-27 01:04:30.325424 | orchestrator | Friday 27 February 2026 01:02:25 +0000 (0:00:01.124) 0:00:50.897 ******* 2026-02-27 01:04:30.325437 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:04:30.325447 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:04:30.325457 | orchestrator | ok: 
[testbed-node-2] 2026-02-27 01:04:30.325466 | orchestrator | 2026-02-27 01:04:30.325476 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-02-27 01:04:30.325486 | orchestrator | Friday 27 February 2026 01:02:26 +0000 (0:00:00.321) 0:00:51.218 ******* 2026-02-27 01:04:30.325495 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.325505 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.325515 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.325524 | orchestrator | 2026-02-27 01:04:30.325534 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-02-27 01:04:30.325543 | orchestrator | Friday 27 February 2026 01:02:26 +0000 (0:00:00.333) 0:00:51.551 ******* 2026-02-27 01:04:30.325553 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.325562 | orchestrator | 2026-02-27 01:04:30.325572 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2026-02-27 01:04:30.325582 | orchestrator | Friday 27 February 2026 01:02:42 +0000 (0:00:16.205) 0:01:07.757 ******* 2026-02-27 01:04:30.325591 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.325601 | orchestrator | 2026-02-27 01:04:30.325610 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-27 01:04:30.325650 | orchestrator | Friday 27 February 2026 01:02:54 +0000 (0:00:11.927) 0:01:19.685 ******* 2026-02-27 01:04:30.325665 | orchestrator | 2026-02-27 01:04:30.325675 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-27 01:04:30.325685 | orchestrator | Friday 27 February 2026 01:02:54 +0000 (0:00:00.088) 0:01:19.774 ******* 2026-02-27 01:04:30.325694 | orchestrator | 2026-02-27 01:04:30.325704 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-02-27 
01:04:30.325713 | orchestrator | Friday 27 February 2026 01:02:54 +0000 (0:00:00.075) 0:01:19.849 ******* 2026-02-27 01:04:30.325722 | orchestrator | 2026-02-27 01:04:30.325732 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-02-27 01:04:30.325741 | orchestrator | Friday 27 February 2026 01:02:54 +0000 (0:00:00.070) 0:01:19.919 ******* 2026-02-27 01:04:30.325751 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.325760 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:04:30.325770 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:04:30.325779 | orchestrator | 2026-02-27 01:04:30.325789 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-02-27 01:04:30.325799 | orchestrator | Friday 27 February 2026 01:03:10 +0000 (0:00:15.171) 0:01:35.091 ******* 2026-02-27 01:04:30.325809 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.325818 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:04:30.325828 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:04:30.325837 | orchestrator | 2026-02-27 01:04:30.325853 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-02-27 01:04:30.325863 | orchestrator | Friday 27 February 2026 01:03:20 +0000 (0:00:10.130) 0:01:45.221 ******* 2026-02-27 01:04:30.325873 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.325883 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:04:30.325892 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:04:30.325902 | orchestrator | 2026-02-27 01:04:30.325911 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-27 01:04:30.325921 | orchestrator | Friday 27 February 2026 01:03:32 +0000 (0:00:12.531) 0:01:57.753 ******* 2026-02-27 01:04:30.325931 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:04:30.325940 | orchestrator | 2026-02-27 01:04:30.325950 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-02-27 01:04:30.325959 | orchestrator | Friday 27 February 2026 01:03:33 +0000 (0:00:00.672) 0:01:58.426 ******* 2026-02-27 01:04:30.325969 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:04:30.325979 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:04:30.325988 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:04:30.325998 | orchestrator | 2026-02-27 01:04:30.326008 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-02-27 01:04:30.326065 | orchestrator | Friday 27 February 2026 01:03:34 +0000 (0:00:00.820) 0:01:59.246 ******* 2026-02-27 01:04:30.326076 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:04:30.326086 | orchestrator | 2026-02-27 01:04:30.326095 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2026-02-27 01:04:30.326105 | orchestrator | Friday 27 February 2026 01:03:35 +0000 (0:00:01.753) 0:02:01.000 ******* 2026-02-27 01:04:30.326115 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-02-27 01:04:30.326124 | orchestrator | 2026-02-27 01:04:30.326134 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-02-27 01:04:30.326144 | orchestrator | Friday 27 February 2026 01:03:48 +0000 (0:00:12.113) 0:02:13.113 ******* 2026-02-27 01:04:30.326153 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-02-27 01:04:30.326163 | orchestrator | 2026-02-27 01:04:30.326173 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-02-27 01:04:30.326182 | orchestrator | Friday 27 February 2026 01:04:17 +0000 (0:00:29.003) 0:02:42.117 ******* 2026-02-27 01:04:30.326192 | orchestrator | ok: [testbed-node-0] => 
(item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-02-27 01:04:30.326208 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-02-27 01:04:30.326217 | orchestrator | 2026-02-27 01:04:30.326227 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-02-27 01:04:30.326237 | orchestrator | Friday 27 February 2026 01:04:24 +0000 (0:00:07.385) 0:02:49.503 ******* 2026-02-27 01:04:30.326247 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.326256 | orchestrator | 2026-02-27 01:04:30.326266 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-02-27 01:04:30.326276 | orchestrator | Friday 27 February 2026 01:04:24 +0000 (0:00:00.112) 0:02:49.615 ******* 2026-02-27 01:04:30.326285 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.326295 | orchestrator | 2026-02-27 01:04:30.326309 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2026-02-27 01:04:30.326319 | orchestrator | Friday 27 February 2026 01:04:24 +0000 (0:00:00.128) 0:02:49.743 ******* 2026-02-27 01:04:30.326329 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.326338 | orchestrator | 2026-02-27 01:04:30.326348 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-02-27 01:04:30.326358 | orchestrator | Friday 27 February 2026 01:04:24 +0000 (0:00:00.105) 0:02:49.849 ******* 2026-02-27 01:04:30.326367 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.326377 | orchestrator | 2026-02-27 01:04:30.326386 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-02-27 01:04:30.326396 | orchestrator | Friday 27 February 2026 01:04:25 +0000 (0:00:00.485) 0:02:50.334 ******* 2026-02-27 01:04:30.326405 | orchestrator | ok: [testbed-node-0] 2026-02-27 
01:04:30.326415 | orchestrator | 2026-02-27 01:04:30.326425 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-27 01:04:30.326434 | orchestrator | Friday 27 February 2026 01:04:28 +0000 (0:00:03.424) 0:02:53.759 ******* 2026-02-27 01:04:30.326444 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:04:30.326453 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:04:30.326463 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:04:30.326472 | orchestrator | 2026-02-27 01:04:30.326482 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:04:30.326493 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-27 01:04:30.326503 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-27 01:04:30.326513 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-27 01:04:30.326523 | orchestrator | 2026-02-27 01:04:30.326532 | orchestrator | 2026-02-27 01:04:30.326542 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:04:30.326551 | orchestrator | Friday 27 February 2026 01:04:29 +0000 (0:00:00.464) 0:02:54.223 ******* 2026-02-27 01:04:30.326561 | orchestrator | =============================================================================== 2026-02-27 01:04:30.326570 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.00s 2026-02-27 01:04:30.326580 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 16.21s 2026-02-27 01:04:30.326595 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 15.17s 2026-02-27 01:04:30.326605 | orchestrator | keystone : Restart keystone container 
---------------------------------- 12.53s 2026-02-27 01:04:30.326636 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.11s 2026-02-27 01:04:30.326647 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.93s 2026-02-27 01:04:30.326657 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.13s 2026-02-27 01:04:30.326666 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.40s 2026-02-27 01:04:30.326681 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.39s 2026-02-27 01:04:30.326691 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.72s 2026-02-27 01:04:30.326700 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.54s 2026-02-27 01:04:30.326710 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.51s 2026-02-27 01:04:30.326719 | orchestrator | keystone : Creating default user role ----------------------------------- 3.42s 2026-02-27 01:04:30.326729 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.08s 2026-02-27 01:04:30.326738 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.72s 2026-02-27 01:04:30.326748 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.55s 2026-02-27 01:04:30.326757 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.45s 2026-02-27 01:04:30.326767 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.00s 2026-02-27 01:04:30.326776 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.85s 2026-02-27 01:04:30.326786 | orchestrator | keystone : Run key distribution 
----------------------------------------- 1.75s
2026-02-27 01:04:30.326795 | orchestrator | 2026-02-27 01:04:30 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:04:33.344416 | orchestrator | 2026-02-27 01:04:33 | INFO  | Task 8582ac9d-cccf-4ba1-b498-31aaf8c5fb35 is in state STARTED
2026-02-27 01:04:33.345124 | orchestrator | 2026-02-27 01:04:33 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state STARTED
2026-02-27 01:04:33.345608 | orchestrator | 2026-02-27 01:04:33 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED
2026-02-27 01:04:33.346446 | orchestrator | 2026-02-27 01:04:33 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED
2026-02-27 01:04:33.347297 | orchestrator | 2026-02-27 01:04:33 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED
2026-02-27 01:04:33.347643 | orchestrator | 2026-02-27 01:04:33 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:04:45.542480 | orchestrator | 2026-02-27 01:04:45 | INFO  | Task 604b3b80-330a-41be-8f55-1169df8e04e2 is in state SUCCESS
2026-02-27 01:04:48.617451 | orchestrator | 2026-02-27 01:04:48 | INFO  | Task 49c29494-95ab-4ed6-b209-68e6ab1360da is in state STARTED
2026-02-27 01:06:26.216811 | orchestrator | 2026-02-27 01:06:26 | INFO  | Task 8582ac9d-cccf-4ba1-b498-31aaf8c5fb35 is in state STARTED
2026-02-27 01:06:26.216959 | orchestrator | 2026-02-27 01:06:26 | INFO  | Task 49c29494-95ab-4ed6-b209-68e6ab1360da is in state SUCCESS
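The wait loop above is the client-side polling that the OSISM manager prints while remote Celery tasks run: every few seconds it reports each task's state and sleeps until all of them reach SUCCESS. A minimal sketch of that pattern in shell — `task_state` here is a hypothetical stub (a marker file stands in for querying the real task result backend):

```shell
# Sketch of the poll loop shown in the log; assumes nothing about the real
# OSISM client. task_state is a stub: a task counts as SUCCESS once a
# marker file /tmp/poll_demo_<id>.done exists.
task_state() {
    if [ -e "/tmp/poll_demo_$1.done" ]; then echo SUCCESS; else echo STARTED; fi
}

# Report every task's state each cycle; drop finished tasks; sleep and
# repeat until none are left.
wait_for_tasks() {
    pending="$*"
    while [ -n "$pending" ]; do
        still=""
        for id in $pending; do
            state=$(task_state "$id")
            echo "Task $id is in state $state"
            [ "$state" = SUCCESS ] || still="$still $id"
        done
        pending=$(echo $still)
        if [ -n "$pending" ]; then
            echo "Wait 1 second(s) until the next check"
            sleep 1
        fi
    done
}
```

In the real job the states come from the manager's task queue, so a task can sit in STARTED for minutes (as above) before flipping to SUCCESS.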
01:06:26.217230 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-02-27 01:06:26.217249 | orchestrator | 2026-02-27 01:06:26.217269 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-02-27 01:06:26.217288 | orchestrator | Friday 27 February 2026 01:03:44 +0000 (0:00:00.273) 0:00:00.273 ******* 2026-02-27 01:06:26.217361 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-02-27 01:06:26.217375 | orchestrator | 2026-02-27 01:06:26.217386 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-02-27 01:06:26.217397 | orchestrator | Friday 27 February 2026 01:03:44 +0000 (0:00:00.248) 0:00:00.522 ******* 2026-02-27 01:06:26.217408 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-02-27 01:06:26.217419 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-02-27 01:06:26.217430 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-02-27 01:06:26.217442 | orchestrator | 2026-02-27 01:06:26.217453 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-02-27 01:06:26.217464 | orchestrator | Friday 27 February 2026 01:03:46 +0000 (0:00:01.375) 0:00:01.897 ******* 2026-02-27 01:06:26.217475 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-02-27 01:06:26.217485 | orchestrator | 2026-02-27 01:06:26.217496 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-02-27 01:06:26.217553 | orchestrator | Friday 27 February 2026 01:03:47 +0000 (0:00:01.663) 0:00:03.561 ******* 2026-02-27 01:06:26.217565 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.217576 | 
orchestrator | 2026-02-27 01:06:26.217587 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-02-27 01:06:26.217598 | orchestrator | Friday 27 February 2026 01:03:49 +0000 (0:00:01.020) 0:00:04.581 ******* 2026-02-27 01:06:26.217609 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.217620 | orchestrator | 2026-02-27 01:06:26.217631 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-02-27 01:06:26.217642 | orchestrator | Friday 27 February 2026 01:03:49 +0000 (0:00:00.976) 0:00:05.558 ******* 2026-02-27 01:06:26.217652 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2026-02-27 01:06:26.217664 | orchestrator | ok: [testbed-manager] 2026-02-27 01:06:26.217675 | orchestrator | 2026-02-27 01:06:26.217686 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-02-27 01:06:26.217697 | orchestrator | Friday 27 February 2026 01:04:34 +0000 (0:00:44.223) 0:00:49.781 ******* 2026-02-27 01:06:26.217708 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-02-27 01:06:26.217745 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-02-27 01:06:26.217757 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-02-27 01:06:26.217793 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-02-27 01:06:26.217806 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-02-27 01:06:26.217819 | orchestrator | 2026-02-27 01:06:26.217832 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-02-27 01:06:26.217845 | orchestrator | Friday 27 February 2026 01:04:38 +0000 (0:00:04.753) 0:00:54.535 ******* 2026-02-27 01:06:26.217857 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-02-27 01:06:26.217870 | orchestrator | 2026-02-27 01:06:26.217882 | orchestrator 
| TASK [osism.services.cephclient : Include package tasks] *********************** 2026-02-27 01:06:26.217895 | orchestrator | Friday 27 February 2026 01:04:39 +0000 (0:00:00.434) 0:00:54.969 ******* 2026-02-27 01:06:26.217907 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:06:26.217919 | orchestrator | 2026-02-27 01:06:26.217932 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-02-27 01:06:26.217945 | orchestrator | Friday 27 February 2026 01:04:39 +0000 (0:00:00.125) 0:00:55.095 ******* 2026-02-27 01:06:26.217958 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:06:26.217971 | orchestrator | 2026-02-27 01:06:26.217983 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2026-02-27 01:06:26.217995 | orchestrator | Friday 27 February 2026 01:04:39 +0000 (0:00:00.482) 0:00:55.578 ******* 2026-02-27 01:06:26.218007 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218080 | orchestrator | 2026-02-27 01:06:26.218094 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-02-27 01:06:26.218146 | orchestrator | Friday 27 February 2026 01:04:41 +0000 (0:00:01.602) 0:00:57.180 ******* 2026-02-27 01:06:26.218159 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218172 | orchestrator | 2026-02-27 01:06:26.218183 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-02-27 01:06:26.218194 | orchestrator | Friday 27 February 2026 01:04:42 +0000 (0:00:00.821) 0:00:58.001 ******* 2026-02-27 01:06:26.218205 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218215 | orchestrator | 2026-02-27 01:06:26.218238 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-02-27 01:06:26.218249 | orchestrator | Friday 27 February 2026 01:04:43 +0000 (0:00:00.664) 0:00:58.666 ******* 
2026-02-27 01:06:26.218260 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-02-27 01:06:26.218271 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-02-27 01:06:26.218282 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-02-27 01:06:26.218293 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-02-27 01:06:26.218304 | orchestrator | 2026-02-27 01:06:26.218315 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:06:26.218326 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-27 01:06:26.218338 | orchestrator | 2026-02-27 01:06:26.218348 | orchestrator | 2026-02-27 01:06:26.218374 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:06:26.218386 | orchestrator | Friday 27 February 2026 01:04:44 +0000 (0:00:01.710) 0:01:00.376 ******* 2026-02-27 01:06:26.218397 | orchestrator | =============================================================================== 2026-02-27 01:06:26.218408 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 44.22s 2026-02-27 01:06:26.218418 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.75s 2026-02-27 01:06:26.218429 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.71s 2026-02-27 01:06:26.218440 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.66s 2026-02-27 01:06:26.218451 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.60s 2026-02-27 01:06:26.218472 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.38s 2026-02-27 01:06:26.218483 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.02s 2026-02-27 01:06:26.218494 | orchestrator 
| osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s 2026-02-27 01:06:26.218505 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.82s 2026-02-27 01:06:26.218515 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.66s 2026-02-27 01:06:26.218526 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.48s 2026-02-27 01:06:26.218537 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s 2026-02-27 01:06:26.218547 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-02-27 01:06:26.218558 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-02-27 01:06:26.218569 | orchestrator | 2026-02-27 01:06:26.218580 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-27 01:06:26.218591 | orchestrator | 2.16.14 2026-02-27 01:06:26.218601 | orchestrator | 2026-02-27 01:06:26.218612 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-02-27 01:06:26.218623 | orchestrator | 2026-02-27 01:06:26.218634 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-02-27 01:06:26.218644 | orchestrator | Friday 27 February 2026 01:04:49 +0000 (0:00:00.296) 0:00:00.296 ******* 2026-02-27 01:06:26.218655 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218666 | orchestrator | 2026-02-27 01:06:26.218677 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-02-27 01:06:26.218688 | orchestrator | Friday 27 February 2026 01:04:51 +0000 (0:00:01.420) 0:00:01.717 ******* 2026-02-27 01:06:26.218698 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218709 | orchestrator | 2026-02-27 01:06:26.218720 | 
orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-02-27 01:06:26.218731 | orchestrator | Friday 27 February 2026 01:04:52 +0000 (0:00:01.140) 0:00:02.857 ******* 2026-02-27 01:06:26.218742 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218752 | orchestrator | 2026-02-27 01:06:26.218781 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-02-27 01:06:26.218792 | orchestrator | Friday 27 February 2026 01:04:53 +0000 (0:00:01.141) 0:00:03.998 ******* 2026-02-27 01:06:26.218803 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218814 | orchestrator | 2026-02-27 01:06:26.218825 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-02-27 01:06:26.218836 | orchestrator | Friday 27 February 2026 01:04:54 +0000 (0:00:01.305) 0:00:05.304 ******* 2026-02-27 01:06:26.218846 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218857 | orchestrator | 2026-02-27 01:06:26.218868 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-02-27 01:06:26.218879 | orchestrator | Friday 27 February 2026 01:04:55 +0000 (0:00:01.145) 0:00:06.449 ******* 2026-02-27 01:06:26.218889 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218900 | orchestrator | 2026-02-27 01:06:26.218911 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-02-27 01:06:26.218922 | orchestrator | Friday 27 February 2026 01:04:57 +0000 (0:00:01.194) 0:00:07.644 ******* 2026-02-27 01:06:26.218932 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218943 | orchestrator | 2026-02-27 01:06:26.218954 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-02-27 01:06:26.218964 | orchestrator | Friday 27 February 2026 01:04:59 +0000 (0:00:02.160) 0:00:09.804 ******* 
2026-02-27 01:06:26.218975 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.218986 | orchestrator | 2026-02-27 01:06:26.218997 | orchestrator | TASK [Create admin user] ******************************************************* 2026-02-27 01:06:26.219007 | orchestrator | Friday 27 February 2026 01:05:00 +0000 (0:00:01.350) 0:00:11.154 ******* 2026-02-27 01:06:26.219026 | orchestrator | changed: [testbed-manager] 2026-02-27 01:06:26.219037 | orchestrator | 2026-02-27 01:06:26.219048 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-02-27 01:06:26.219059 | orchestrator | Friday 27 February 2026 01:06:01 +0000 (0:01:00.440) 0:01:11.595 ******* 2026-02-27 01:06:26.219075 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:06:26.219086 | orchestrator | 2026-02-27 01:06:26.219097 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-27 01:06:26.219107 | orchestrator | 2026-02-27 01:06:26.219118 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-27 01:06:26.219129 | orchestrator | Friday 27 February 2026 01:06:01 +0000 (0:00:00.215) 0:01:11.810 ******* 2026-02-27 01:06:26.219139 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:06:26.219150 | orchestrator | 2026-02-27 01:06:26.219161 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-02-27 01:06:26.219171 | orchestrator | 2026-02-27 01:06:26.219182 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-27 01:06:26.219193 | orchestrator | Friday 27 February 2026 01:06:12 +0000 (0:00:11.617) 0:01:23.428 ******* 2026-02-27 01:06:26.219204 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:06:26.219214 | orchestrator | 2026-02-27 01:06:26.219232 | orchestrator | PLAY [Restart ceph manager services] 
******************************************* 2026-02-27 01:06:26.219243 | orchestrator | 2026-02-27 01:06:26.219254 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-02-27 01:06:26.219312 | orchestrator | Friday 27 February 2026 01:06:24 +0000 (0:00:11.481) 0:01:34.910 ******* 2026-02-27 01:06:26.219325 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:06:26.219336 | orchestrator | 2026-02-27 01:06:26.219347 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:06:26.219357 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-27 01:06:26.219368 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:06:26.219379 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:06:26.219391 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:06:26.219401 | orchestrator | 2026-02-27 01:06:26.219412 | orchestrator | 2026-02-27 01:06:26.219423 | orchestrator | 2026-02-27 01:06:26.219433 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:06:26.219445 | orchestrator | Friday 27 February 2026 01:06:25 +0000 (0:00:01.162) 0:01:36.072 ******* 2026-02-27 01:06:26.219455 | orchestrator | =============================================================================== 2026-02-27 01:06:26.219507 | orchestrator | Create admin user ------------------------------------------------------ 60.44s 2026-02-27 01:06:26.219519 | orchestrator | Restart ceph manager service ------------------------------------------- 24.26s 2026-02-27 01:06:26.219530 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.16s 2026-02-27 01:06:26.219541 | 
orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.42s 2026-02-27 01:06:26.219552 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.35s 2026-02-27 01:06:26.219563 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.31s 2026-02-27 01:06:26.219573 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.19s 2026-02-27 01:06:26.219584 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.15s 2026-02-27 01:06:26.219595 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s 2026-02-27 01:06:26.219614 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.14s 2026-02-27 01:06:26.219626 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.22s 2026-02-27 01:06:26.219636 | orchestrator | 2026-02-27 01:06:26 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:26.219740 | orchestrator | 2026-02-27 01:06:26 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:26.222221 | orchestrator | 2026-02-27 01:06:26 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:26.222263 | orchestrator | 2026-02-27 01:06:26 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:29.246674 | orchestrator | 2026-02-27 01:06:29 | INFO  | Task 8582ac9d-cccf-4ba1-b498-31aaf8c5fb35 is in state STARTED 2026-02-27 01:06:29.247084 | orchestrator | 2026-02-27 01:06:29 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:29.247700 | orchestrator | 2026-02-27 01:06:29 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:29.248556 | orchestrator | 2026-02-27 01:06:29 | INFO  | Task 
0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:29.248582 | orchestrator | 2026-02-27 01:06:29 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:32.284459 | orchestrator | 2026-02-27 01:06:32 | INFO  | Task 8582ac9d-cccf-4ba1-b498-31aaf8c5fb35 is in state STARTED 2026-02-27 01:06:32.285226 | orchestrator | 2026-02-27 01:06:32 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:32.286745 | orchestrator | 2026-02-27 01:06:32 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:32.287642 | orchestrator | 2026-02-27 01:06:32 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:32.287679 | orchestrator | 2026-02-27 01:06:32 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:35.317575 | orchestrator | 2026-02-27 01:06:35 | INFO  | Task 8582ac9d-cccf-4ba1-b498-31aaf8c5fb35 is in state SUCCESS 2026-02-27 01:06:35.320692 | orchestrator | 2026-02-27 01:06:35 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:35.321587 | orchestrator | 2026-02-27 01:06:35 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:35.324297 | orchestrator | 2026-02-27 01:06:35 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:35.324838 | orchestrator | 2026-02-27 01:06:35 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:35.325797 | orchestrator | 2026-02-27 01:06:35 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:38.355707 | orchestrator | 2026-02-27 01:06:38 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:38.356010 | orchestrator | 2026-02-27 01:06:38 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:38.356649 | orchestrator | 2026-02-27 01:06:38 | INFO  | Task 
1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:38.357264 | orchestrator | 2026-02-27 01:06:38 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:38.357373 | orchestrator | 2026-02-27 01:06:38 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:41.391300 | orchestrator | 2026-02-27 01:06:41 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:41.392190 | orchestrator | 2026-02-27 01:06:41 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:41.393320 | orchestrator | 2026-02-27 01:06:41 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:41.394756 | orchestrator | 2026-02-27 01:06:41 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:41.394877 | orchestrator | 2026-02-27 01:06:41 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:44.440869 | orchestrator | 2026-02-27 01:06:44 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:44.444051 | orchestrator | 2026-02-27 01:06:44 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:44.446930 | orchestrator | 2026-02-27 01:06:44 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:44.448511 | orchestrator | 2026-02-27 01:06:44 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:44.448670 | orchestrator | 2026-02-27 01:06:44 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:47.487306 | orchestrator | 2026-02-27 01:06:47 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:47.487440 | orchestrator | 2026-02-27 01:06:47 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:47.489033 | orchestrator | 2026-02-27 01:06:47 | INFO  | Task 
1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:47.489644 | orchestrator | 2026-02-27 01:06:47 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:47.489680 | orchestrator | 2026-02-27 01:06:47 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:50.520446 | orchestrator | 2026-02-27 01:06:50 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:50.523932 | orchestrator | 2026-02-27 01:06:50 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:50.524512 | orchestrator | 2026-02-27 01:06:50 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:50.525662 | orchestrator | 2026-02-27 01:06:50 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:50.525763 | orchestrator | 2026-02-27 01:06:50 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:53.553072 | orchestrator | 2026-02-27 01:06:53 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:53.554243 | orchestrator | 2026-02-27 01:06:53 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:53.555398 | orchestrator | 2026-02-27 01:06:53 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:53.556293 | orchestrator | 2026-02-27 01:06:53 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:53.556331 | orchestrator | 2026-02-27 01:06:53 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:56.590104 | orchestrator | 2026-02-27 01:06:56 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:56.590617 | orchestrator | 2026-02-27 01:06:56 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:56.591335 | orchestrator | 2026-02-27 01:06:56 | INFO  | Task 
1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:56.592435 | orchestrator | 2026-02-27 01:06:56 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:56.592520 | orchestrator | 2026-02-27 01:06:56 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:06:59.625499 | orchestrator | 2026-02-27 01:06:59 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:06:59.626501 | orchestrator | 2026-02-27 01:06:59 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:06:59.627515 | orchestrator | 2026-02-27 01:06:59 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:06:59.628937 | orchestrator | 2026-02-27 01:06:59 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:06:59.628991 | orchestrator | 2026-02-27 01:06:59 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:07:02.672876 | orchestrator | 2026-02-27 01:07:02 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:07:02.673174 | orchestrator | 2026-02-27 01:07:02 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:07:02.674223 | orchestrator | 2026-02-27 01:07:02 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:07:02.674979 | orchestrator | 2026-02-27 01:07:02 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:07:02.675027 | orchestrator | 2026-02-27 01:07:02 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:07:05.712803 | orchestrator | 2026-02-27 01:07:05 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:07:05.713501 | orchestrator | 2026-02-27 01:07:05 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:07:05.715462 | orchestrator | 2026-02-27 01:07:05 | INFO  | Task 
1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:07:05.716958 | orchestrator | 2026-02-27 01:07:05 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:07:05.717222 | orchestrator | 2026-02-27 01:07:05 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:07:08.760075 | orchestrator | 2026-02-27 01:07:08 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:07:08.760424 | orchestrator | 2026-02-27 01:07:08 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:07:08.761195 | orchestrator | 2026-02-27 01:07:08 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:07:08.761886 | orchestrator | 2026-02-27 01:07:08 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state STARTED 2026-02-27 01:07:08.762123 | orchestrator | 2026-02-27 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:07:11.796544 | orchestrator | 2026-02-27 01:07:11 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:07:11.797115 | orchestrator | 2026-02-27 01:07:11 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:07:11.797958 | orchestrator | 2026-02-27 01:07:11 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:07:11.798730 | orchestrator | 2026-02-27 01:07:11 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED 2026-02-27 01:07:11.800457 | orchestrator | 2026-02-27 01:07:11 | INFO  | Task 0cd3d4d7-51d6-41ea-94a0-37478fd6275d is in state SUCCESS 2026-02-27 01:07:11.802410 | orchestrator | 2026-02-27 01:07:11.802460 | orchestrator | 2026-02-27 01:07:11.802472 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-02-27 01:07:11.802485 | orchestrator | 2026-02-27 01:07:11.802496 | orchestrator | TASK [Ensure the destination directory exists] 
********************************* 2026-02-27 01:07:11.802508 | orchestrator | Friday 27 February 2026 01:04:33 +0000 (0:00:00.093) 0:00:00.093 ******* 2026-02-27 01:07:11.802520 | orchestrator | changed: [localhost] 2026-02-27 01:07:11.802532 | orchestrator | 2026-02-27 01:07:11.802592 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-02-27 01:07:11.802612 | orchestrator | Friday 27 February 2026 01:04:34 +0000 (0:00:01.297) 0:00:01.390 ******* 2026-02-27 01:07:11.802631 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 2026-02-27 01:07:11.802648 | orchestrator | changed: [localhost] 2026-02-27 01:07:11.802667 | orchestrator | 2026-02-27 01:07:11.802685 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2026-02-27 01:07:11.802705 | orchestrator | Friday 27 February 2026 01:06:03 +0000 (0:01:28.308) 0:01:29.699 ******* 2026-02-27 01:07:11.802723 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent kernel (3 retries left). 
2026-02-27 01:07:11.802742 | orchestrator | changed: [localhost] 2026-02-27 01:07:11.802921 | orchestrator | 2026-02-27 01:07:11.802941 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:07:11.802958 | orchestrator | 2026-02-27 01:07:11.802973 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:07:11.802991 | orchestrator | Friday 27 February 2026 01:06:31 +0000 (0:00:28.463) 0:01:58.162 ******* 2026-02-27 01:07:11.803003 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:07:11.803015 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:07:11.803026 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:07:11.803037 | orchestrator | 2026-02-27 01:07:11.803049 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:07:11.803061 | orchestrator | Friday 27 February 2026 01:06:31 +0000 (0:00:00.420) 0:01:58.583 ******* 2026-02-27 01:07:11.803072 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2026-02-27 01:07:11.803083 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2026-02-27 01:07:11.803095 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2026-02-27 01:07:11.803106 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2026-02-27 01:07:11.803117 | orchestrator | 2026-02-27 01:07:11.803129 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2026-02-27 01:07:11.803141 | orchestrator | skipping: no hosts matched 2026-02-27 01:07:11.803153 | orchestrator | 2026-02-27 01:07:11.803164 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:07:11.803175 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:07:11.803189 | orchestrator | 
testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:07:11.803201 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:07:11.803213 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:07:11.803225 | orchestrator | 2026-02-27 01:07:11.803236 | orchestrator | 2026-02-27 01:07:11.803247 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:07:11.803259 | orchestrator | Friday 27 February 2026 01:06:32 +0000 (0:00:00.596) 0:01:59.180 ******* 2026-02-27 01:07:11.803269 | orchestrator | =============================================================================== 2026-02-27 01:07:11.803280 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 88.31s 2026-02-27 01:07:11.803292 | orchestrator | Download ironic-agent kernel ------------------------------------------- 28.46s 2026-02-27 01:07:11.803303 | orchestrator | Ensure the destination directory exists --------------------------------- 1.30s 2026-02-27 01:07:11.803314 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2026-02-27 01:07:11.803325 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2026-02-27 01:07:11.803337 | orchestrator | 2026-02-27 01:07:11.803348 | orchestrator | 2026-02-27 01:07:11.803358 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:07:11.803380 | orchestrator | 2026-02-27 01:07:11.803389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:07:11.803399 | orchestrator | Friday 27 February 2026 01:04:33 +0000 (0:00:00.523) 0:00:00.523 ******* 2026-02-27 01:07:11.803408 | orchestrator | ok: [testbed-node-0] 2026-02-27 
01:07:11.803418 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:07:11.803427 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:07:11.803437 | orchestrator | 2026-02-27 01:07:11.803446 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:07:11.803456 | orchestrator | Friday 27 February 2026 01:04:34 +0000 (0:00:00.582) 0:00:01.106 ******* 2026-02-27 01:07:11.803465 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-27 01:07:11.803475 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-27 01:07:11.803484 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-27 01:07:11.803493 | orchestrator | 2026-02-27 01:07:11.803503 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-02-27 01:07:11.803513 | orchestrator | 2026-02-27 01:07:11.803522 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-27 01:07:11.803532 | orchestrator | Friday 27 February 2026 01:04:35 +0000 (0:00:00.693) 0:00:01.799 ******* 2026-02-27 01:07:11.803557 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:07:11.803567 | orchestrator | 2026-02-27 01:07:11.803577 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-27 01:07:11.803586 | orchestrator | Friday 27 February 2026 01:04:36 +0000 (0:00:00.934) 0:00:02.734 ******* 2026-02-27 01:07:11.803596 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-27 01:07:11.803606 | orchestrator | 2026-02-27 01:07:11.803615 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-27 01:07:11.803632 | orchestrator | Friday 27 February 2026 01:04:40 +0000 (0:00:04.408) 0:00:07.143 ******* 2026-02-27 01:07:11.803643 | 
orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-27 01:07:11.803653 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-27 01:07:11.803662 | orchestrator | 2026-02-27 01:07:11.803672 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-27 01:07:11.803682 | orchestrator | Friday 27 February 2026 01:04:48 +0000 (0:00:07.885) 0:00:15.028 ******* 2026-02-27 01:07:11.803691 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 2026-02-27 01:07:11.803701 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-27 01:07:11.803710 | orchestrator | 2026-02-27 01:07:11.803720 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-27 01:07:11.803729 | orchestrator | Friday 27 February 2026 01:05:05 +0000 (0:00:17.199) 0:00:32.227 ******* 2026-02-27 01:07:11.803739 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-27 01:07:11.803748 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-27 01:07:11.803757 | orchestrator | 2026-02-27 01:07:11.803767 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-27 01:07:11.803776 | orchestrator | Friday 27 February 2026 01:05:10 +0000 (0:00:05.069) 0:00:37.296 ******* 2026-02-27 01:07:11.803786 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-27 01:07:11.803796 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-27 01:07:11.803805 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-27 01:07:11.803901 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-27 01:07:11.803914 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-27 01:07:11.803924 | 
orchestrator | 2026-02-27 01:07:11.803933 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-27 01:07:11.803951 | orchestrator | Friday 27 February 2026 01:05:27 +0000 (0:00:17.304) 0:00:54.600 ******* 2026-02-27 01:07:11.803961 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-27 01:07:11.803970 | orchestrator | 2026-02-27 01:07:11.803980 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-27 01:07:11.803989 | orchestrator | Friday 27 February 2026 01:05:32 +0000 (0:00:04.216) 0:00:58.817 ******* 2026-02-27 01:07:11.804003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.804017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.804043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.804055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804134 | orchestrator | 2026-02-27 01:07:11.804144 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-27 01:07:11.804154 | orchestrator | Friday 27 February 2026 01:05:34 +0000 (0:00:02.447) 0:01:01.265 ******* 2026-02-27 
01:07:11.804164 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-27 01:07:11.804174 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-27 01:07:11.804184 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-27 01:07:11.804193 | orchestrator | 2026-02-27 01:07:11.804203 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-27 01:07:11.804212 | orchestrator | Friday 27 February 2026 01:05:36 +0000 (0:00:02.061) 0:01:03.326 ******* 2026-02-27 01:07:11.804228 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:11.804237 | orchestrator | 2026-02-27 01:07:11.804247 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-27 01:07:11.804256 | orchestrator | Friday 27 February 2026 01:05:36 +0000 (0:00:00.158) 0:01:03.484 ******* 2026-02-27 01:07:11.804266 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:11.804276 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:11.804285 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:11.804295 | orchestrator | 2026-02-27 01:07:11.804305 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-27 01:07:11.804314 | orchestrator | Friday 27 February 2026 01:05:37 +0000 (0:00:00.959) 0:01:04.445 ******* 2026-02-27 01:07:11.804324 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:07:11.804334 | orchestrator | 2026-02-27 01:07:11.804343 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-27 01:07:11.804353 | orchestrator | Friday 27 February 2026 01:05:39 +0000 (0:00:01.662) 0:01:06.107 ******* 2026-02-27 01:07:11.804363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.804374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.804395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.804420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.804488 | orchestrator | 2026-02-27 01:07:11.804498 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2026-02-27 01:07:11.804524 | orchestrator | Friday 27 February 2026 01:05:44 +0000 (0:00:05.010) 0:01:11.118 ******* 2026-02-27 01:07:11.804535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.804546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804566 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:11.804576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.804593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804624 | orchestrator | skipping: [testbed-node-1] 2026-02-27 
01:07:11.804634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.804644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804664 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:11.804674 | orchestrator | 2026-02-27 01:07:11.804684 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-27 01:07:11.804694 | orchestrator | Friday 27 February 2026 01:05:46 +0000 (0:00:01.968) 0:01:13.086 ******* 2026-02-27 01:07:11.804709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.804730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804761 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:11.804777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.804795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.804853 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:11.805193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.805223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.805233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.805243 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:11.805253 | orchestrator | 
2026-02-27 01:07:11.805263 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-27 01:07:11.805273 | orchestrator | Friday 27 February 2026 01:05:49 +0000 (0:00:03.172) 0:01:16.258 ******* 2026-02-27 01:07:11.805283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.805294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.805322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.805333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805344 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805379 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805409 | orchestrator | 2026-02-27 01:07:11.805419 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-27 01:07:11.805429 | orchestrator | Friday 27 February 2026 01:05:54 +0000 (0:00:04.675) 0:01:20.933 ******* 2026-02-27 01:07:11.805438 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:11.805448 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:11.805458 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:11.805467 | orchestrator | 2026-02-27 01:07:11.805477 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-27 01:07:11.805487 | orchestrator | Friday 27 February 2026 01:05:59 +0000 (0:00:05.449) 0:01:26.383 ******* 
2026-02-27 01:07:11.805496 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:07:11.805506 | orchestrator | 2026-02-27 01:07:11.805515 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-27 01:07:11.805525 | orchestrator | Friday 27 February 2026 01:06:01 +0000 (0:00:01.980) 0:01:28.363 ******* 2026-02-27 01:07:11.805534 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:11.805544 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:11.805554 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:11.805570 | orchestrator | 2026-02-27 01:07:11.805585 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-27 01:07:11.805610 | orchestrator | Friday 27 February 2026 01:06:02 +0000 (0:00:00.814) 0:01:29.177 ******* 2026-02-27 01:07:11.805629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.805646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.805684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.805708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805740 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.805776 | orchestrator | 2026-02-27 01:07:11.805788 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-27 
01:07:11.805800 | orchestrator | Friday 27 February 2026 01:06:14 +0000 (0:00:11.525) 0:01:40.703 ******* 2026-02-27 01:07:11.805851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.805865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.805877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.805889 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:11.805913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.805925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.805946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.805962 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:11.805974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-27 01:07:11.805985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.805997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:11.806013 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:11.806083 | orchestrator | 2026-02-27 01:07:11.806094 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-27 01:07:11.806106 | orchestrator | Friday 27 February 2026 01:06:14 +0000 (0:00:00.744) 0:01:41.447 ******* 2026-02-27 01:07:11.806120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.806144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.806155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-27 01:07:11.806165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.806182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.806192 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.806207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.806222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.806232 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:11.806242 | orchestrator | 2026-02-27 01:07:11.806253 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-27 01:07:11.806262 | orchestrator | Friday 27 February 2026 01:06:18 +0000 (0:00:04.115) 0:01:45.563 ******* 2026-02-27 01:07:11.806272 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:11.806282 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:11.806297 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:11.806307 | orchestrator | 2026-02-27 01:07:11.806316 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-27 01:07:11.806326 | orchestrator | Friday 27 February 2026 01:06:19 +0000 (0:00:00.327) 0:01:45.890 ******* 2026-02-27 01:07:11.806336 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:11.806345 | orchestrator | 2026-02-27 01:07:11.806355 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-27 01:07:11.806364 | orchestrator | Friday 27 February 2026 01:06:21 +0000 (0:00:02.664) 0:01:48.554 ******* 2026-02-27 01:07:11.806374 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:11.806384 | orchestrator | 2026-02-27 01:07:11.806393 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-27 01:07:11.806403 | orchestrator | Friday 27 February 2026 
01:06:24 +0000 (0:00:02.549) 0:01:51.104 ******* 2026-02-27 01:07:11.806413 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:11.806422 | orchestrator | 2026-02-27 01:07:11.806432 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-27 01:07:11.806441 | orchestrator | Friday 27 February 2026 01:06:36 +0000 (0:00:12.241) 0:02:03.345 ******* 2026-02-27 01:07:11.806451 | orchestrator | 2026-02-27 01:07:11.806461 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-27 01:07:11.806470 | orchestrator | Friday 27 February 2026 01:06:36 +0000 (0:00:00.061) 0:02:03.407 ******* 2026-02-27 01:07:11.806480 | orchestrator | 2026-02-27 01:07:11.806489 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-27 01:07:11.806499 | orchestrator | Friday 27 February 2026 01:06:36 +0000 (0:00:00.069) 0:02:03.476 ******* 2026-02-27 01:07:11.806509 | orchestrator | 2026-02-27 01:07:11.806518 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-27 01:07:11.806528 | orchestrator | Friday 27 February 2026 01:06:36 +0000 (0:00:00.089) 0:02:03.566 ******* 2026-02-27 01:07:11.806538 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:11.806547 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:11.806557 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:11.806567 | orchestrator | 2026-02-27 01:07:11.806576 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-27 01:07:11.806586 | orchestrator | Friday 27 February 2026 01:06:49 +0000 (0:00:12.876) 0:02:16.442 ******* 2026-02-27 01:07:11.806596 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:11.806605 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:11.806615 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:11.806625 | 
orchestrator | 2026-02-27 01:07:11.806635 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-27 01:07:11.806644 | orchestrator | Friday 27 February 2026 01:06:57 +0000 (0:00:08.122) 0:02:24.564 ******* 2026-02-27 01:07:11.806654 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:11.806664 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:11.806673 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:11.806683 | orchestrator | 2026-02-27 01:07:11.806693 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:07:11.806703 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 01:07:11.806714 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-27 01:07:11.806724 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-27 01:07:11.806733 | orchestrator | 2026-02-27 01:07:11.806743 | orchestrator | 2026-02-27 01:07:11.806753 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:07:11.806768 | orchestrator | Friday 27 February 2026 01:07:08 +0000 (0:00:10.439) 0:02:35.004 ******* 2026-02-27 01:07:11.806784 | orchestrator | =============================================================================== 2026-02-27 01:07:11.806794 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.30s 2026-02-27 01:07:11.806803 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 17.20s 2026-02-27 01:07:11.806837 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.88s 2026-02-27 01:07:11.806855 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.24s 2026-02-27 
01:07:11.806871 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.53s 2026-02-27 01:07:11.806887 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.44s 2026-02-27 01:07:11.806904 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 8.12s 2026-02-27 01:07:11.806917 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.89s 2026-02-27 01:07:11.806927 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 5.45s 2026-02-27 01:07:11.806936 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 5.07s 2026-02-27 01:07:11.806946 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.01s 2026-02-27 01:07:11.806955 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.67s 2026-02-27 01:07:11.806965 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.41s 2026-02-27 01:07:11.806974 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.22s 2026-02-27 01:07:11.806984 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.12s 2026-02-27 01:07:11.806993 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 3.17s 2026-02-27 01:07:11.807003 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.66s 2026-02-27 01:07:11.807012 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.55s 2026-02-27 01:07:11.807022 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.45s 2026-02-27 01:07:11.807031 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.06s 2026-02-27 01:07:11.807041 
| orchestrator | 2026-02-27 01:07:11 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:07:14.839292 | orchestrator | 2026-02-27 01:07:14 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED
2026-02-27 01:07:14.839653 | orchestrator | 2026-02-27 01:07:14 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED
2026-02-27 01:07:14.841506 | orchestrator | 2026-02-27 01:07:14 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED
2026-02-27 01:07:14.842306 | orchestrator | 2026-02-27 01:07:14 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state STARTED
2026-02-27 01:07:14.842435 | orchestrator | 2026-02-27 01:07:14 | INFO  | Wait 1 second(s) until the next check
[... identical check rounds every ~3 seconds from 01:07:17 through 01:07:54 elided; all four tasks remained in state STARTED ...]
2026-02-27 01:07:57.592099 | orchestrator | 2026-02-27 01:07:57 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED
2026-02-27 01:07:57.593595 | orchestrator | 2026-02-27 01:07:57 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED
2026-02-27 01:07:57.596154 | orchestrator | 2026-02-27 01:07:57 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED
2026-02-27 01:07:57.600129 | orchestrator |
2026-02-27 01:07:57.600251 | orchestrator | 2026-02-27 01:07:57 | INFO  | Task 1e1b6230-35d7-48bc-a0b6-318f3073c09b is in state SUCCESS
2026-02-27 01:07:57.600954 | orchestrator | 2026-02-27 01:07:57 | INFO  | Wait 1 second(s) until the next check
2026-02-27
01:07:57.602574 | orchestrator |
2026-02-27 01:07:57.602614 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 01:07:57.602634 | orchestrator |
2026-02-27 01:07:57.602654 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 01:07:57.602673 | orchestrator | Friday 27 February 2026 01:04:34 +0000 (0:00:00.565) 0:00:00.566 *******
2026-02-27 01:07:57.602693 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:07:57.602712 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:07:57.602728 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:07:57.602739 | orchestrator |
2026-02-27 01:07:57.602751 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 01:07:57.602761 | orchestrator | Friday 27 February 2026 01:04:34 +0000 (0:00:00.480) 0:00:01.046 *******
2026-02-27 01:07:57.602773 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-27 01:07:57.602784 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-27 01:07:57.602794 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-27 01:07:57.602805 | orchestrator |
2026-02-27 01:07:57.602816 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-27 01:07:57.602955 | orchestrator |
2026-02-27 01:07:57.603016 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-27 01:07:57.603036 | orchestrator | Friday 27 February 2026 01:04:35 +0000 (0:00:00.665) 0:00:01.711 *******
2026-02-27 01:07:57.603136 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:07:57.603159 | orchestrator |
2026-02-27 01:07:57.603177 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-27 01:07:57.603196 | orchestrator | Friday 27 February 2026 01:04:36 +0000 (0:00:00.823) 0:00:02.535 *******
2026-02-27 01:07:57.603215 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-27 01:07:57.603234 | orchestrator |
2026-02-27 01:07:57.603253 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-27 01:07:57.603274 | orchestrator | Friday 27 February 2026 01:04:40 +0000 (0:00:04.344) 0:00:06.880 *******
2026-02-27 01:07:57.603295 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-27 01:07:57.603315 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-27 01:07:57.603332 | orchestrator |
2026-02-27 01:07:57.603344 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-27 01:07:57.603358 | orchestrator | Friday 27 February 2026 01:04:48 +0000 (0:00:07.780) 0:00:14.661 *******
2026-02-27 01:07:57.603383 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-02-27 01:07:57.603395 | orchestrator |
2026-02-27 01:07:57.603438 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-27 01:07:57.603452 | orchestrator | Friday 27 February 2026 01:04:52 +0000 (0:00:03.758) 0:00:18.419 *******
2026-02-27 01:07:57.603469 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-27 01:07:57.603768 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-27 01:07:57.603792 | orchestrator |
2026-02-27 01:07:57.603803 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-27 01:07:57.603814 | orchestrator | Friday 27 February 2026 01:04:56 +0000 (0:00:04.568) 0:00:22.987 *******
2026-02-27 01:07:57.603825 | orchestrator | ok: [testbed-node-0] => (item=admin)
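The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" cycle earlier in this log is a plain poll-until-terminal loop over Celery-style task states. A minimal sketch of that pattern, assuming a hypothetical get_state callable in place of OSISM's real task-status lookup (wait_for_tasks and its parameters are illustrative, not the actual osism code):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=None):
    """Poll each task until it reaches a terminal state (SUCCESS/FAILURE).

    get_state is a stand-in for the real status query (e.g. a Celery
    AsyncResult lookup); it takes a task id and returns a state string.
    """
    pending = set(task_ids)
    results = {}
    waited = 0.0
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        # Drop finished tasks, then sleep before the next round of checks.
        pending -= set(results)
        if pending:
            if timeout is not None and waited >= timeout:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
            waited += interval
    return results
```

With interval=1 this reproduces the cadence seen above, modulo the time each status query itself takes (which is why consecutive rounds in the log are ~3 seconds apart rather than exactly 1).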
2026-02-27 01:07:57.603836 | orchestrator |
2026-02-27 01:07:57.603846 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-02-27 01:07:57.603884 | orchestrator | Friday 27 February 2026 01:05:00 +0000 (0:00:03.908) 0:00:26.896 *******
2026-02-27 01:07:57.603896 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-02-27 01:07:57.603907 | orchestrator |
2026-02-27 01:07:57.603918 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-02-27 01:07:57.603929 | orchestrator | Friday 27 February 2026 01:05:04 +0000 (0:00:04.160) 0:00:31.056 *******
2026-02-27 01:07:57.603943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.603979 | orchestrator | changed: [testbed-node-2] => (item=designate-api)
2026-02-27 01:07:57.603992 | orchestrator | changed: [testbed-node-1] => (item=designate-api)
2026-02-27 01:07:57.604005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.604031 | orchestrator | changed: [testbed-node-2] => (item=designate-backend-bind9)
2026-02-27 01:07:57.604043 | orchestrator | changed: [testbed-node-1] => (item=designate-backend-bind9)
2026-02-27 01:07:57.604055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.604074 | orchestrator | changed: [testbed-node-2] => (item=designate-central)
2026-02-27 01:07:57.604123 | orchestrator | changed: [testbed-node-1] => (item=designate-central)
2026-02-27 01:07:57.604173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.604199 | orchestrator | changed: [testbed-node-0] => (item=designate-mdns)
2026-02-27 01:07:57.604211 | orchestrator | changed: [testbed-node-1] => (item=designate-mdns)
2026-02-27 01:07:57.604222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.604240 | orchestrator | changed: [testbed-node-0] => (item=designate-producer)
2026-02-27 01:07:57.604252 | orchestrator | changed: [testbed-node-1] => (item=designate-producer)
2026-02-27 01:07:57.604264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.604281 | orchestrator | changed: [testbed-node-0] => (item=designate-worker)
2026-02-27 01:07:57.604296 | orchestrator | changed: [testbed-node-1] => (item=designate-worker)
2026-02-27 01:07:57.604308 | orchestrator |
2026-02-27 01:07:57.604320 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-02-27 01:07:57.604331 | orchestrator | Friday 27 February 2026 01:05:08 +0000 (0:00:03.386) 0:00:34.443 *******
2026-02-27 01:07:57.604342 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:07:57.604353 | orchestrator |
2026-02-27 01:07:57.604364 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-02-27 01:07:57.604376 | orchestrator | Friday 27 February 2026 01:05:08 +0000 (0:00:00.149) 0:00:34.593 *******
2026-02-27 01:07:57.604395 | orchestrator | skipping: [testbed-node-0]
2026-02-27
01:07:57.604413 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:07:57.604431 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:07:57.604450 | orchestrator |
2026-02-27 01:07:57.604467 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-27 01:07:57.604487 | orchestrator | Friday 27 February 2026 01:05:08 +0000 (0:00:00.305) 0:00:34.898 *******
2026-02-27 01:07:57.604505 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:07:57.604525 | orchestrator |
2026-02-27 01:07:57.604544 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-02-27 01:07:57.604556 | orchestrator | Friday 27 February 2026 01:05:09 +0000 (0:00:00.644) 0:00:35.543 *******
2026-02-27 01:07:57.604576 | orchestrator | changed: [testbed-node-2] => (item=designate-api)
2026-02-27 01:07:57.604588 | orchestrator | changed: [testbed-node-0] => (item=designate-api)
2026-02-27 01:07:57.604609 | orchestrator | changed: [testbed-node-1] => (item=designate-api)
2026-02-27 01:07:57.604626 | orchestrator | changed: [testbed-node-0] => (item=designate-backend-bind9)
2026-02-27 01:07:57.604638 | orchestrator | changed: [testbed-node-1] => (item=designate-backend-bind9)
2026-02-27 01:07:57.604650 | orchestrator | changed: [testbed-node-2] => (item=designate-backend-bind9)
2026-02-27 01:07:57.604668 | orchestrator | changed: [testbed-node-0] => (item=designate-central)
2026-02-27 01:07:57.604679 | orchestrator | changed: [testbed-node-2] => (item=designate-central)
2026-02-27 01:07:57.604704 | orchestrator | changed: [testbed-node-1] => (item=designate-central)
2026-02-27 01:07:57.604730 | orchestrator | changed: [testbed-node-0] => (item=designate-mdns)
2026-02-27 01:07:57.604750 | orchestrator | changed: [testbed-node-2] => (item=designate-mdns)
2026-02-27 01:07:57.604769 | orchestrator | changed: [testbed-node-1] => (item=designate-mdns)
2026-02-27 01:07:57.604788 | orchestrator | changed: [testbed-node-0] => (item=designate-producer)
2026-02-27 01:07:57.604818 | orchestrator | changed: [testbed-node-2] => (item=designate-producer)
2026-02-27 01:07:57.604851 | orchestrator | changed: [testbed-node-1] => (item=designate-producer)
2026-02-27 01:07:57.604897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.604926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.604947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.604966 | orchestrator | 2026-02-27 01:07:57.604985 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-27 01:07:57.605004 | orchestrator | Friday 27 February 2026 01:05:15 +0000 (0:00:06.201) 0:00:41.744 ******* 2026-02-27 01:07:57.605024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.605054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.605094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.605115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.605143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.605156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-27 01:07:57.605168 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:57.605179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.605825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.605914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.605929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.605949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.605961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.605972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.606002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.606060 | 
orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:57.606073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606114 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606125 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:57.606137 | orchestrator | 2026-02-27 01:07:57.606148 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-27 01:07:57.606159 | orchestrator | Friday 27 February 2026 01:05:16 +0000 (0:00:00.791) 0:00:42.536 ******* 2026-02-27 01:07:57.606170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.606195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.606207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606284 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:57.606295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.606319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.606331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606382 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:57.606393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.606416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.606431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.606443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606494 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:07:57.606506 | orchestrator |
2026-02-27 01:07:57.606519 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-27 01:07:57.606531 | orchestrator | Friday 27 February 2026 01:05:18 +0000 (0:00:02.057) 0:00:44.594 *******
2026-02-27 01:07:57.606545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.606565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.606580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.606597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.606611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.606630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.606650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606834 | orchestrator |
2026-02-27 01:07:57.606845 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-27 01:07:57.606875 | orchestrator | Friday 27 February 2026 01:05:25 +0000 (0:00:07.080) 0:00:51.674 *******
2026-02-27 01:07:57.606887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.606904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.606916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.606928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.606951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.606963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.606974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.606991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607143 | orchestrator |
2026-02-27 01:07:57.607154 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2026-02-27 01:07:57.607165 | orchestrator | Friday 27 February 2026 01:05:51 +0000 (0:00:26.550) 0:01:18.225 *******
2026-02-27 01:07:57.607176 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-27 01:07:57.607188 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-27 01:07:57.607198 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2026-02-27 01:07:57.607209 | orchestrator |
2026-02-27 01:07:57.607220 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2026-02-27 01:07:57.607231 | orchestrator | Friday 27 February 2026 01:06:02 +0000 (0:00:10.514) 0:01:28.740 *******
2026-02-27 01:07:57.607242 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-27 01:07:57.607252 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-27 01:07:57.607263 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2026-02-27 01:07:57.607274 | orchestrator |
2026-02-27 01:07:57.607285 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2026-02-27 01:07:57.607295 | orchestrator | Friday 27 February 2026 01:06:06 +0000 (0:00:04.589) 0:01:33.330 *******
2026-02-27 01:07:57.607313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.607325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.607354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-27 01:07:57.607366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.607377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.607436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-27 01:07:57.607492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:07:57.607574 | orchestrator |
2026-02-27 01:07:57.607585 | orchestrator | TASK [designate : Copying over rndc.key]
*************************************** 2026-02-27 01:07:57.607596 | orchestrator | Friday 27 February 2026 01:06:10 +0000 (0:00:03.857) 0:01:37.188 ******* 2026-02-27 01:07:57.607614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.607627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 
01:07:57.607645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.607660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-27 01:07:57.607672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-27 01:07:57.607729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-27 01:07:57.607827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.607886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.607902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.607913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.607924 | orchestrator | 2026-02-27 01:07:57.607936 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-27 01:07:57.607948 | orchestrator | Friday 27 February 2026 01:06:15 +0000 (0:00:04.167) 0:01:41.355 ******* 2026-02-27 01:07:57.607960 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:57.607971 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:57.607982 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:57.607993 | orchestrator | 2026-02-27 01:07:57.608004 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-27 01:07:57.608015 | orchestrator | Friday 27 February 2026 01:06:15 +0000 (0:00:00.612) 0:01:41.968 ******* 2026-02-27 01:07:57.608033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.608052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.608073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608173 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:57.608202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.608236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.608257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608327 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:57.608344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-27 01:07:57.608363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-27 01:07:57.608375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:07:57.608552 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:57.608563 | orchestrator | 2026-02-27 01:07:57.608575 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-27 01:07:57.608586 | orchestrator | Friday 27 February 2026 01:06:17 +0000 (0:00:01.616) 0:01:43.584 ******* 2026-02-27 01:07:57.608606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-27 01:07:57.608619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-27 01:07:57.608630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-27 01:07:57.608646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:07:57.608957 | orchestrator | 2026-02-27 01:07:57.608972 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-27 01:07:57.608983 | orchestrator | Friday 27 February 2026 01:06:22 +0000 (0:00:05.716) 0:01:49.301 ******* 2026-02-27 01:07:57.608995 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:07:57.609006 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:07:57.609017 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:07:57.609027 | orchestrator | 2026-02-27 01:07:57.609038 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-02-27 01:07:57.609049 | orchestrator | Friday 27 February 2026 01:06:23 +0000 (0:00:00.557) 0:01:49.859 ******* 2026-02-27 01:07:57.609060 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-27 01:07:57.609071 | orchestrator | 2026-02-27 01:07:57.609090 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-27 01:07:57.609110 | orchestrator | Friday 27 February 2026 01:06:25 +0000 (0:00:02.388) 0:01:52.247 ******* 2026-02-27 01:07:57.609130 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-27 01:07:57.609149 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-27 01:07:57.609165 | orchestrator | 2026-02-27 01:07:57.609177 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-27 01:07:57.609194 | orchestrator | Friday 27 February 2026 01:06:28 +0000 (0:00:02.438) 0:01:54.686 ******* 2026-02-27 01:07:57.609204 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609213 | orchestrator | 2026-02-27 01:07:57.609223 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-27 01:07:57.609232 | orchestrator | Friday 27 February 2026 01:06:44 +0000 (0:00:16.002) 0:02:10.688 ******* 2026-02-27 01:07:57.609242 | orchestrator | 2026-02-27 01:07:57.609252 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-27 01:07:57.609262 | orchestrator | Friday 27 February 2026 01:06:44 +0000 (0:00:00.063) 0:02:10.752 ******* 2026-02-27 01:07:57.609271 | orchestrator | 2026-02-27 01:07:57.609281 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-27 01:07:57.609290 | orchestrator | Friday 27 February 2026 01:06:44 +0000 (0:00:00.069) 0:02:10.821 ******* 2026-02-27 01:07:57.609300 | orchestrator | 2026-02-27 
01:07:57.609309 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-27 01:07:57.609318 | orchestrator | Friday 27 February 2026 01:06:44 +0000 (0:00:00.068) 0:02:10.890 ******* 2026-02-27 01:07:57.609328 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609337 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:57.609347 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:57.609356 | orchestrator | 2026-02-27 01:07:57.609366 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-27 01:07:57.609376 | orchestrator | Friday 27 February 2026 01:06:57 +0000 (0:00:13.328) 0:02:24.219 ******* 2026-02-27 01:07:57.609385 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609395 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:57.609404 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:57.609414 | orchestrator | 2026-02-27 01:07:57.609423 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-27 01:07:57.609433 | orchestrator | Friday 27 February 2026 01:07:11 +0000 (0:00:14.011) 0:02:38.230 ******* 2026-02-27 01:07:57.609443 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:57.609452 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609462 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:57.609478 | orchestrator | 2026-02-27 01:07:57.609487 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-27 01:07:57.609497 | orchestrator | Friday 27 February 2026 01:07:26 +0000 (0:00:14.912) 0:02:53.143 ******* 2026-02-27 01:07:57.609506 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:57.609516 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:57.609525 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609535 | orchestrator | 2026-02-27 01:07:57.609544 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-27 01:07:57.609554 | orchestrator | Friday 27 February 2026 01:07:35 +0000 (0:00:08.789) 0:03:01.932 ******* 2026-02-27 01:07:57.609564 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609573 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:57.609583 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:57.609593 | orchestrator | 2026-02-27 01:07:57.609610 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-27 01:07:57.609634 | orchestrator | Friday 27 February 2026 01:07:41 +0000 (0:00:05.870) 0:03:07.803 ******* 2026-02-27 01:07:57.609652 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609675 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:07:57.609691 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:07:57.609708 | orchestrator | 2026-02-27 01:07:57.609725 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-27 01:07:57.609742 | orchestrator | Friday 27 February 2026 01:07:47 +0000 (0:00:06.170) 0:03:13.973 ******* 2026-02-27 01:07:57.609759 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:07:57.609769 | orchestrator | 2026-02-27 01:07:57.609778 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:07:57.609789 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 01:07:57.609799 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-27 01:07:57.609809 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-27 01:07:57.609818 | orchestrator | 2026-02-27 01:07:57.609828 | orchestrator | 2026-02-27 01:07:57.609837 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-27 01:07:57.609847 | orchestrator | Friday 27 February 2026 01:07:56 +0000 (0:00:08.705) 0:03:22.679 ******* 2026-02-27 01:07:57.609909 | orchestrator | =============================================================================== 2026-02-27 01:07:57.609921 | orchestrator | designate : Copying over designate.conf -------------------------------- 26.55s 2026-02-27 01:07:57.609931 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.00s 2026-02-27 01:07:57.609941 | orchestrator | designate : Restart designate-central container ------------------------ 14.91s 2026-02-27 01:07:57.609950 | orchestrator | designate : Restart designate-api container ---------------------------- 14.01s 2026-02-27 01:07:57.609960 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.33s 2026-02-27 01:07:57.609969 | orchestrator | designate : Copying over pools.yaml ------------------------------------ 10.51s 2026-02-27 01:07:57.609979 | orchestrator | designate : Restart designate-producer container ------------------------ 8.79s 2026-02-27 01:07:57.609988 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.71s 2026-02-27 01:07:57.609998 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.78s 2026-02-27 01:07:57.610008 | orchestrator | designate : Copying over config.json files for services ----------------- 7.08s 2026-02-27 01:07:57.610074 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.20s 2026-02-27 01:07:57.610085 | orchestrator | designate : Restart designate-worker container -------------------------- 6.17s 2026-02-27 01:07:57.610104 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.87s 2026-02-27 01:07:57.610114 | orchestrator | designate : Check designate 
containers ---------------------------------- 5.72s 2026-02-27 01:07:57.610124 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.59s 2026-02-27 01:07:57.610133 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.57s 2026-02-27 01:07:57.610143 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.34s 2026-02-27 01:07:57.610152 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.17s 2026-02-27 01:07:57.610162 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.16s 2026-02-27 01:07:57.610172 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.91s 2026-02-27 01:08:00.652522 | orchestrator | 2026-02-27 01:08:00 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:08:00.654240 | orchestrator | 2026-02-27 01:08:00 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:00.655279 | orchestrator | 2026-02-27 01:08:00 | INFO  | Task 2288f86f-7614-4d0e-b0ae-75b3c286a560 is in state STARTED 2026-02-27 01:08:00.657415 | orchestrator | 2026-02-27 01:08:00 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state STARTED 2026-02-27 01:08:00.657492 | orchestrator | 2026-02-27 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:08:03.686539 | orchestrator | 2026-02-27 01:08:03 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:08:03.687212 | orchestrator | 2026-02-27 01:08:03 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:08:03.688070 | orchestrator | 2026-02-27 01:08:03 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:03.689303 | orchestrator | 2026-02-27 01:08:03 | INFO  | Task 2288f86f-7614-4d0e-b0ae-75b3c286a560 is in state STARTED 
2026-02-27 01:08:03.690800 | orchestrator | 2026-02-27 01:08:03 | INFO  | Task 2005c553-5f42-4dd9-a0ee-e72606aec97f is in state SUCCESS 2026-02-27 01:08:03.693074 | orchestrator | 2026-02-27 01:08:03.693129 | orchestrator | 2026-02-27 01:08:03.693142 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:08:03.693155 | orchestrator | 2026-02-27 01:08:03.693166 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:08:03.693194 | orchestrator | Friday 27 February 2026 01:06:40 +0000 (0:00:00.286) 0:00:00.286 ******* 2026-02-27 01:08:03.693206 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:08:03.693218 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:08:03.693229 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:08:03.693240 | orchestrator | 2026-02-27 01:08:03.693251 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:08:03.693262 | orchestrator | Friday 27 February 2026 01:06:40 +0000 (0:00:00.438) 0:00:00.724 ******* 2026-02-27 01:08:03.693273 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-27 01:08:03.693285 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-27 01:08:03.693296 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-27 01:08:03.693306 | orchestrator | 2026-02-27 01:08:03.693317 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-27 01:08:03.693328 | orchestrator | 2026-02-27 01:08:03.693338 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-27 01:08:03.693349 | orchestrator | Friday 27 February 2026 01:06:41 +0000 (0:00:00.755) 0:00:01.480 ******* 2026-02-27 01:08:03.693360 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-27 01:08:03.693371 | orchestrator | 2026-02-27 01:08:03.693382 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-27 01:08:03.693417 | orchestrator | Friday 27 February 2026 01:06:42 +0000 (0:00:00.673) 0:00:02.153 ******* 2026-02-27 01:08:03.693429 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-27 01:08:03.693439 | orchestrator | 2026-02-27 01:08:03.693450 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-27 01:08:03.693461 | orchestrator | Friday 27 February 2026 01:06:45 +0000 (0:00:03.731) 0:00:05.886 ******* 2026-02-27 01:08:03.693472 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-27 01:08:03.693483 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-27 01:08:03.693495 | orchestrator | 2026-02-27 01:08:03.693505 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-27 01:08:03.693516 | orchestrator | Friday 27 February 2026 01:06:52 +0000 (0:00:06.700) 0:00:12.587 ******* 2026-02-27 01:08:03.693527 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-27 01:08:03.693542 | orchestrator | 2026-02-27 01:08:03.693560 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-27 01:08:03.693583 | orchestrator | Friday 27 February 2026 01:06:55 +0000 (0:00:03.119) 0:00:15.706 ******* 2026-02-27 01:08:03.693611 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-27 01:08:03.693628 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-02-27 01:08:03.693645 | orchestrator | 2026-02-27 01:08:03.693662 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 
2026-02-27 01:08:03.693680 | orchestrator | Friday 27 February 2026 01:06:59 +0000 (0:00:03.955) 0:00:19.661 ******* 2026-02-27 01:08:03.693699 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-27 01:08:03.693717 | orchestrator | 2026-02-27 01:08:03.693736 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-27 01:08:03.693754 | orchestrator | Friday 27 February 2026 01:07:04 +0000 (0:00:04.402) 0:00:24.063 ******* 2026-02-27 01:08:03.693771 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-27 01:08:03.693788 | orchestrator | 2026-02-27 01:08:03.693799 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-27 01:08:03.693810 | orchestrator | Friday 27 February 2026 01:07:08 +0000 (0:00:04.493) 0:00:28.557 ******* 2026-02-27 01:08:03.693821 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:08:03.693832 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:08:03.693843 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:08:03.693853 | orchestrator | 2026-02-27 01:08:03.693893 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-27 01:08:03.693904 | orchestrator | Friday 27 February 2026 01:07:09 +0000 (0:00:00.526) 0:00:29.084 ******* 2026-02-27 01:08:03.693919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': 
'30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.693960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.693986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.693998 | orchestrator | 2026-02-27 01:08:03.694009 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-27 01:08:03.694072 | orchestrator | Friday 27 February 2026 01:07:10 +0000 (0:00:01.675) 0:00:30.760 ******* 2026-02-27 01:08:03.694084 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:08:03.694095 | orchestrator | 2026-02-27 01:08:03.694106 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-27 01:08:03.694117 | orchestrator | Friday 27 February 2026 01:07:10 +0000 (0:00:00.168) 0:00:30.928 ******* 2026-02-27 01:08:03.694128 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:08:03.694139 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:08:03.694150 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:08:03.694161 | orchestrator | 2026-02-27 01:08:03.694171 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-27 01:08:03.694182 | orchestrator | Friday 27 February 2026 01:07:11 +0000 (0:00:00.633) 0:00:31.562 ******* 2026-02-27 01:08:03.694193 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:08:03.694204 | orchestrator | 2026-02-27 01:08:03.694215 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-27 01:08:03.694226 | orchestrator | Friday 27 February 2026 01:07:13 +0000 (0:00:01.601) 0:00:33.164 ******* 2026-02-27 01:08:03.694238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694297 | orchestrator | 2026-02-27 01:08:03.694308 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-27 01:08:03.694319 | orchestrator | Friday 27 February 2026 01:07:16 +0000 (0:00:03.005) 0:00:36.169 ******* 2026-02-27 01:08:03.694331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694342 | orchestrator | skipping: [testbed-node-0] 2026-02-27 
01:08:03.694354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694365 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:08:03.694390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694402 | 
orchestrator | skipping: [testbed-node-1] 2026-02-27 01:08:03.694413 | orchestrator | 2026-02-27 01:08:03.694428 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-27 01:08:03.694440 | orchestrator | Friday 27 February 2026 01:07:17 +0000 (0:00:01.421) 0:00:37.590 ******* 2026-02-27 01:08:03.694451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694462 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:08:03.694474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694485 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:08:03.694496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694513 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:08:03.694524 | orchestrator | 2026-02-27 01:08:03.694535 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-27 01:08:03.694546 | orchestrator | Friday 27 February 2026 01:07:18 +0000 (0:00:00.960) 0:00:38.551 ******* 2026-02-27 01:08:03.694571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694607 | orchestrator | 2026-02-27 01:08:03.694618 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-27 01:08:03.694629 | orchestrator | Friday 27 February 2026 01:07:20 +0000 (0:00:01.467) 0:00:40.018 ******* 2026-02-27 01:08:03.694640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.694694 | orchestrator | 2026-02-27 01:08:03.694705 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-27 01:08:03.694716 | orchestrator | Friday 27 February 2026 01:07:22 +0000 (0:00:02.422) 0:00:42.441 ******* 
2026-02-27 01:08:03.694727 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-27 01:08:03.694738 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-27 01:08:03.694750 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-27 01:08:03.694760 | orchestrator |
2026-02-27 01:08:03.694771 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-02-27 01:08:03.694782 | orchestrator | Friday 27 February 2026 01:07:24 +0000 (0:00:01.630) 0:00:44.071 *******
2026-02-27 01:08:03.694793 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:08:03.694804 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:08:03.694815 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:08:03.694826 | orchestrator |
2026-02-27 01:08:03.694837 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-02-27 01:08:03.694848 | orchestrator | Friday 27 February 2026 01:07:25 +0000 (0:00:01.420) 0:00:45.492 *******
2026-02-27 01:08:03.694888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694930 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:08:03.694947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.694966 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:08:03.695001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-27 01:08:03.695021 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:08:03.695039 | orchestrator | 2026-02-27 01:08:03.695057 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-27 01:08:03.695072 | orchestrator | Friday 27 February 2026 01:07:26 +0000 (0:00:00.532) 0:00:46.024 ******* 2026-02-27 01:08:03.695091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.695111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.695141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-27 01:08:03.695161 | orchestrator | 2026-02-27 01:08:03.695181 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-27 01:08:03.695200 | orchestrator | Friday 27 February 2026 01:07:27 +0000 (0:00:01.153) 0:00:47.178 ******* 2026-02-27 01:08:03.695218 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:08:03.695229 | orchestrator | 2026-02-27 01:08:03.695240 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-27 01:08:03.695251 | orchestrator | Friday 27 
February 2026 01:07:30 +0000 (0:00:02.912) 0:00:50.091 *******
2026-02-27 01:08:03.695262 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:08:03.695272 | orchestrator |
2026-02-27 01:08:03.695283 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-02-27 01:08:03.695294 | orchestrator | Friday 27 February 2026 01:07:32 +0000 (0:00:02.753) 0:00:52.845 *******
2026-02-27 01:08:03.695313 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:08:03.695324 | orchestrator |
2026-02-27 01:08:03.695335 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-02-27 01:08:03.695346 | orchestrator | Friday 27 February 2026 01:07:49 +0000 (0:00:16.843) 0:01:09.688 *******
2026-02-27 01:08:03.695357 | orchestrator |
2026-02-27 01:08:03.695374 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-02-27 01:08:03.695385 | orchestrator | Friday 27 February 2026 01:07:49 +0000 (0:00:00.073) 0:01:09.762 *******
2026-02-27 01:08:03.695396 | orchestrator |
2026-02-27 01:08:03.695407 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-02-27 01:08:03.695418 | orchestrator | Friday 27 February 2026 01:07:49 +0000 (0:00:00.074) 0:01:09.832 *******
2026-02-27 01:08:03.695428 | orchestrator |
2026-02-27 01:08:03.695439 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-02-27 01:08:03.695450 | orchestrator | Friday 27 February 2026 01:07:49 +0000 (0:00:00.074) 0:01:09.906 *******
2026-02-27 01:08:03.695461 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:08:03.695472 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:08:03.695483 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:08:03.695494 | orchestrator |
2026-02-27 01:08:03.695504 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 01:08:03.695516 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-27 01:08:03.695528 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-27 01:08:03.695546 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-27 01:08:03.695557 | orchestrator |
2026-02-27 01:08:03.695568 | orchestrator |
2026-02-27 01:08:03.695579 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 01:08:03.695590 | orchestrator | Friday 27 February 2026 01:08:00 +0000 (0:00:10.466) 0:01:20.373 *******
2026-02-27 01:08:03.695600 | orchestrator | ===============================================================================
2026-02-27 01:08:03.695611 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.84s
2026-02-27 01:08:03.695622 | orchestrator | placement : Restart placement-api container ---------------------------- 10.47s
2026-02-27 01:08:03.695633 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.70s
2026-02-27 01:08:03.695643 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.49s
2026-02-27 01:08:03.695659 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 4.40s
2026-02-27 01:08:03.695677 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.96s
2026-02-27 01:08:03.695696 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.73s
2026-02-27 01:08:03.695714 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.12s
2026-02-27 01:08:03.695731 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 3.01s
2026-02-27 01:08:03.695749 | orchestrator | placement : Creating placement databases -------------------------------- 2.91s
2026-02-27 01:08:03.695765 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.75s
2026-02-27 01:08:03.695781 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.42s
2026-02-27 01:08:03.695796 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.68s
2026-02-27 01:08:03.695815 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.63s
2026-02-27 01:08:03.695833 | orchestrator | placement : include_tasks ----------------------------------------------- 1.60s
2026-02-27 01:08:03.695852 | orchestrator | placement : Copying over config.json files for services ----------------- 1.47s
2026-02-27 01:08:03.695943 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.42s
2026-02-27 01:08:03.695957 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.42s
2026-02-27 01:08:03.695980 | orchestrator | placement : Check placement containers ---------------------------------- 1.15s
2026-02-27 01:08:03.696009 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.96s
2026-02-27 01:08:03.696026 | orchestrator | 2026-02-27 01:08:03 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:08:06.730365 | orchestrator | 2026-02-27 01:08:06 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED
2026-02-27 01:08:06.731036 | orchestrator | 2026-02-27 01:08:06 | INFO  | Task d8da18cf-a47b-4d56-b847-7a774ee86d8e is in state STARTED
2026-02-27 01:08:06.734151 | orchestrator | 2026-02-27 01:08:06 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED
2026-02-27 01:08:06.734189 | orchestrator | 2026-02-27 01:08:06 | 
INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:06.735097 | orchestrator | 2026-02-27 01:08:06 | INFO  | Task 2288f86f-7614-4d0e-b0ae-75b3c286a560 is in state SUCCESS 2026-02-27 01:08:06.735123 | orchestrator | 2026-02-27 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:08:09.767469 | orchestrator | 2026-02-27 01:08:09 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:08:09.768360 | orchestrator | 2026-02-27 01:08:09 | INFO  | Task d8da18cf-a47b-4d56-b847-7a774ee86d8e is in state STARTED 2026-02-27 01:08:09.770270 | orchestrator | 2026-02-27 01:08:09 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:08:09.770704 | orchestrator | 2026-02-27 01:08:09 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:09.770829 | orchestrator | 2026-02-27 01:08:09 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:08:12.844617 | orchestrator | 2026-02-27 01:08:12 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:08:12.844728 | orchestrator | 2026-02-27 01:08:12 | INFO  | Task d8da18cf-a47b-4d56-b847-7a774ee86d8e is in state STARTED 2026-02-27 01:08:12.844743 | orchestrator | 2026-02-27 01:08:12 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:08:12.844752 | orchestrator | 2026-02-27 01:08:12 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:12.844762 | orchestrator | 2026-02-27 01:08:12 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:08:15.851190 | orchestrator | 2026-02-27 01:08:15 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:08:15.852113 | orchestrator | 2026-02-27 01:08:15 | INFO  | Task d8da18cf-a47b-4d56-b847-7a774ee86d8e is in state STARTED 2026-02-27 01:08:15.853274 | orchestrator | 2026-02-27 01:08:15 | INFO  | Task 
5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:08:43.880400 | orchestrator | 2026-02-27 01:08:43 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:43.880442 | orchestrator | 2026-02-27 01:08:43 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:08:47.110209 | orchestrator | 2026-02-27 01:08:47 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:08:47.110508 | orchestrator | 2026-02-27 01:08:47 | INFO  | Task d8da18cf-a47b-4d56-b847-7a774ee86d8e is in state STARTED 2026-02-27 01:08:47.111059 | orchestrator | 2026-02-27 01:08:47 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:08:47.112614 | orchestrator | 2026-02-27 01:08:47 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:47.112635 | orchestrator | 2026-02-27 01:08:47 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:08:50.248495 | orchestrator | 2026-02-27 01:08:50 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:08:50.248585 | orchestrator | 2026-02-27 01:08:50 | INFO  | Task d8da18cf-a47b-4d56-b847-7a774ee86d8e is in state SUCCESS 2026-02-27 01:08:50.248600 | orchestrator | 2026-02-27 01:08:50 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:08:50.248612 | orchestrator | 2026-02-27 01:08:50 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:08:50.248624 | orchestrator | 2026-02-27 01:08:50 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:08:53.169834 | orchestrator | 2026-02-27 01:08:53 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:08:53.174267 | orchestrator | 2026-02-27 01:08:53 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:08:53.177176 | orchestrator | 2026-02-27 01:08:53 | INFO  | Task 
5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:09:20.601177 | orchestrator | 2026-02-27 01:09:20 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:09:20.601659 | orchestrator | 2026-02-27 01:09:20 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:23.636430 | orchestrator | 2026-02-27 01:09:23 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:23.637875 | orchestrator | 2026-02-27 01:09:23 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:23.639886 | orchestrator | 2026-02-27 01:09:23 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:09:23.641681 | orchestrator | 2026-02-27 01:09:23 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:09:23.642070 | orchestrator | 2026-02-27 01:09:23 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:26.693213 | orchestrator | 2026-02-27 01:09:26 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:26.697132 | orchestrator | 2026-02-27 01:09:26 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:26.698907 | orchestrator | 2026-02-27 01:09:26 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state STARTED 2026-02-27 01:09:26.700333 | orchestrator | 2026-02-27 01:09:26 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:09:26.700970 | orchestrator | 2026-02-27 01:09:26 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:29.759544 | orchestrator | 2026-02-27 01:09:29.759663 | orchestrator | 2026-02-27 01:09:29.759687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:09:29.759705 | orchestrator | 2026-02-27 01:09:29.759724 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2026-02-27 01:09:29.759743 | orchestrator | Friday 27 February 2026 01:08:01 +0000 (0:00:00.184) 0:00:00.184 ******* 2026-02-27 01:09:29.759761 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:29.759801 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:29.759819 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:29.759836 | orchestrator | 2026-02-27 01:09:29.759853 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:09:29.759870 | orchestrator | Friday 27 February 2026 01:08:01 +0000 (0:00:00.322) 0:00:00.506 ******* 2026-02-27 01:09:29.759887 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-02-27 01:09:29.759905 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-02-27 01:09:29.760040 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-02-27 01:09:29.760062 | orchestrator | 2026-02-27 01:09:29.760080 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-02-27 01:09:29.760099 | orchestrator | 2026-02-27 01:09:29.760118 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-02-27 01:09:29.760137 | orchestrator | Friday 27 February 2026 01:08:02 +0000 (0:00:01.045) 0:00:01.552 ******* 2026-02-27 01:09:29.760155 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:29.760172 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:29.760189 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:29.760206 | orchestrator | 2026-02-27 01:09:29.760221 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:09:29.760236 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.760278 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.760292 | 
orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.760306 | orchestrator | 2026-02-27 01:09:29.760320 | orchestrator | 2026-02-27 01:09:29.760334 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:09:29.760347 | orchestrator | Friday 27 February 2026 01:08:03 +0000 (0:00:00.818) 0:00:02.372 ******* 2026-02-27 01:09:29.760376 | orchestrator | =============================================================================== 2026-02-27 01:09:29.760391 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.05s 2026-02-27 01:09:29.760405 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.82s 2026-02-27 01:09:29.760419 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-02-27 01:09:29.760433 | orchestrator | 2026-02-27 01:09:29.760447 | orchestrator | 2026-02-27 01:09:29.760461 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:09:29.760475 | orchestrator | 2026-02-27 01:09:29.760490 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:09:29.760503 | orchestrator | Friday 27 February 2026 01:08:08 +0000 (0:00:00.420) 0:00:00.420 ******* 2026-02-27 01:09:29.760517 | orchestrator | ok: [testbed-manager] 2026-02-27 01:09:29.760531 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:29.760544 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:29.760558 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:29.760571 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:09:29.760585 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:09:29.760599 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:09:29.760612 | orchestrator | 2026-02-27 01:09:29.760626 | orchestrator | TASK [Group hosts based on 
enabled services] *********************************** 2026-02-27 01:09:29.760640 | orchestrator | Friday 27 February 2026 01:08:10 +0000 (0:00:01.311) 0:00:01.731 ******* 2026-02-27 01:09:29.760654 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-27 01:09:29.760668 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-27 01:09:29.760682 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-27 01:09:29.760696 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-27 01:09:29.760709 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-27 01:09:29.760723 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-27 01:09:29.760736 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-27 01:09:29.760750 | orchestrator | 2026-02-27 01:09:29.760764 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-27 01:09:29.760777 | orchestrator | 2026-02-27 01:09:29.760791 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-27 01:09:29.760805 | orchestrator | Friday 27 February 2026 01:08:12 +0000 (0:00:02.018) 0:00:03.750 ******* 2026-02-27 01:09:29.760820 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:09:29.760835 | orchestrator | 2026-02-27 01:09:29.760848 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-27 01:09:29.760862 | orchestrator | Friday 27 February 2026 01:08:15 +0000 (0:00:03.068) 0:00:06.819 ******* 2026-02-27 01:09:29.760876 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-27 01:09:29.760890 | orchestrator | 2026-02-27 01:09:29.760903 | orchestrator | TASK 
[service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-27 01:09:29.760917 | orchestrator | Friday 27 February 2026 01:08:20 +0000 (0:00:05.435) 0:00:12.254 ******* 2026-02-27 01:09:29.760964 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-27 01:09:29.760998 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-27 01:09:29.761012 | orchestrator | 2026-02-27 01:09:29.761025 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-27 01:09:29.761039 | orchestrator | Friday 27 February 2026 01:08:28 +0000 (0:00:08.214) 0:00:20.469 ******* 2026-02-27 01:09:29.761053 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-27 01:09:29.761066 | orchestrator | 2026-02-27 01:09:29.761080 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-27 01:09:29.761094 | orchestrator | Friday 27 February 2026 01:08:32 +0000 (0:00:03.189) 0:00:23.658 ******* 2026-02-27 01:09:29.761107 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-27 01:09:29.761121 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-27 01:09:29.761135 | orchestrator | 2026-02-27 01:09:29.761148 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-27 01:09:29.761161 | orchestrator | Friday 27 February 2026 01:08:36 +0000 (0:00:04.378) 0:00:28.037 ******* 2026-02-27 01:09:29.761174 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-27 01:09:29.761187 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-27 01:09:29.761199 | orchestrator | 2026-02-27 01:09:29.761213 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] 
******************** 2026-02-27 01:09:29.761227 | orchestrator | Friday 27 February 2026 01:08:42 +0000 (0:00:06.203) 0:00:34.240 ******* 2026-02-27 01:09:29.761241 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-27 01:09:29.761255 | orchestrator | 2026-02-27 01:09:29.761268 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:09:29.761282 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.761296 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.761310 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.761331 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.761344 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.761357 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.761370 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:09:29.761384 | orchestrator | 2026-02-27 01:09:29.761397 | orchestrator | 2026-02-27 01:09:29.761411 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:09:29.761425 | orchestrator | Friday 27 February 2026 01:08:48 +0000 (0:00:06.345) 0:00:40.585 ******* 2026-02-27 01:09:29.761439 | orchestrator | =============================================================================== 2026-02-27 01:09:29.761452 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 8.21s 2026-02-27 01:09:29.761466 | orchestrator | service-ks-register : ceph-rgw | Granting user 
roles -------------------- 6.35s 2026-02-27 01:09:29.761480 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.20s 2026-02-27 01:09:29.761494 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 5.44s 2026-02-27 01:09:29.761507 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.38s 2026-02-27 01:09:29.761529 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.19s 2026-02-27 01:09:29.761543 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.07s 2026-02-27 01:09:29.761556 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.02s 2026-02-27 01:09:29.761570 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.31s 2026-02-27 01:09:29.761583 | orchestrator | 2026-02-27 01:09:29.761597 | orchestrator | 2026-02-27 01:09:29.761610 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:09:29.761624 | orchestrator | 2026-02-27 01:09:29.761638 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:09:29.761651 | orchestrator | Friday 27 February 2026 01:07:18 +0000 (0:00:00.512) 0:00:00.512 ******* 2026-02-27 01:09:29.761665 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:29.761677 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:29.761691 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:29.761703 | orchestrator | 2026-02-27 01:09:29.761716 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:09:29.761728 | orchestrator | Friday 27 February 2026 01:07:18 +0000 (0:00:00.472) 0:00:00.984 ******* 2026-02-27 01:09:29.761741 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-27 
01:09:29.761755 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-27 01:09:29.761769 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-27 01:09:29.761783 | orchestrator | 2026-02-27 01:09:29.761797 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-27 01:09:29.761811 | orchestrator | 2026-02-27 01:09:29.761824 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-27 01:09:29.761838 | orchestrator | Friday 27 February 2026 01:07:18 +0000 (0:00:00.508) 0:00:01.493 ******* 2026-02-27 01:09:29.761860 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:09:29.761972 | orchestrator | 2026-02-27 01:09:29.761988 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-27 01:09:29.762003 | orchestrator | Friday 27 February 2026 01:07:19 +0000 (0:00:00.518) 0:00:02.011 ******* 2026-02-27 01:09:29.762071 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-27 01:09:29.762088 | orchestrator | 2026-02-27 01:09:29.762102 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-27 01:09:29.762115 | orchestrator | Friday 27 February 2026 01:07:23 +0000 (0:00:03.883) 0:00:05.895 ******* 2026-02-27 01:09:29.762129 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-27 01:09:29.762143 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-27 01:09:29.762157 | orchestrator | 2026-02-27 01:09:29.762171 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-27 01:09:29.762185 | orchestrator | Friday 27 February 2026 01:07:30 +0000 
(0:00:07.226) 0:00:13.122 ******* 2026-02-27 01:09:29.762199 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-27 01:09:29.762212 | orchestrator | 2026-02-27 01:09:29.762226 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-27 01:09:29.762240 | orchestrator | Friday 27 February 2026 01:07:34 +0000 (0:00:03.529) 0:00:16.651 ******* 2026-02-27 01:09:29.762254 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-27 01:09:29.762268 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-27 01:09:29.762281 | orchestrator | 2026-02-27 01:09:29.762294 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-27 01:09:29.762309 | orchestrator | Friday 27 February 2026 01:07:38 +0000 (0:00:04.571) 0:00:21.223 ******* 2026-02-27 01:09:29.762323 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-27 01:09:29.762347 | orchestrator | 2026-02-27 01:09:29.762361 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-27 01:09:29.762374 | orchestrator | Friday 27 February 2026 01:07:42 +0000 (0:00:03.714) 0:00:24.937 ******* 2026-02-27 01:09:29.762388 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-27 01:09:29.762402 | orchestrator | 2026-02-27 01:09:29.762416 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-27 01:09:29.762436 | orchestrator | Friday 27 February 2026 01:07:46 +0000 (0:00:04.362) 0:00:29.299 ******* 2026-02-27 01:09:29.762450 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.762464 | orchestrator | 2026-02-27 01:09:29.762477 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-27 01:09:29.762491 | orchestrator | Friday 27 February 2026 01:07:50 +0000 (0:00:04.074) 0:00:33.374 
******* 2026-02-27 01:09:29.762504 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.762518 | orchestrator | 2026-02-27 01:09:29.762531 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-27 01:09:29.762545 | orchestrator | Friday 27 February 2026 01:07:55 +0000 (0:00:04.195) 0:00:37.569 ******* 2026-02-27 01:09:29.762558 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.762572 | orchestrator | 2026-02-27 01:09:29.762586 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-27 01:09:29.762599 | orchestrator | Friday 27 February 2026 01:07:58 +0000 (0:00:03.412) 0:00:40.982 ******* 2026-02-27 01:09:29.762616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.762641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.762656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.762683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.762697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.762711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.762725 | orchestrator | 2026-02-27 
01:09:29.762738 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-27 01:09:29.762751 | orchestrator | Friday 27 February 2026 01:08:00 +0000 (0:00:01.582) 0:00:42.564 ******* 2026-02-27 01:09:29.762764 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:29.762777 | orchestrator | 2026-02-27 01:09:29.762790 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-27 01:09:29.762804 | orchestrator | Friday 27 February 2026 01:08:00 +0000 (0:00:00.169) 0:00:42.733 ******* 2026-02-27 01:09:29.762817 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:29.762830 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:29.762843 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:29.762856 | orchestrator | 2026-02-27 01:09:29.762870 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-27 01:09:29.762883 | orchestrator | Friday 27 February 2026 01:08:00 +0000 (0:00:00.565) 0:00:43.299 ******* 2026-02-27 01:09:29.762896 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:09:29.762910 | orchestrator | 2026-02-27 01:09:29.762950 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-27 01:09:29.762964 | orchestrator | Friday 27 February 2026 01:08:01 +0000 (0:00:01.020) 0:00:44.320 ******* 2026-02-27 01:09:29.762977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.763067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.763091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.763105 | orchestrator | 2026-02-27 01:09:29.763118 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-27 01:09:29.763132 | orchestrator | Friday 27 February 2026 01:08:04 +0000 (0:00:02.834) 0:00:47.154 ******* 2026-02-27 01:09:29.763145 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:29.763158 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:29.763172 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:29.763185 | orchestrator | 2026-02-27 01:09:29.763199 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-27 01:09:29.763212 | orchestrator | Friday 27 February 2026 01:08:04 +0000 (0:00:00.329) 0:00:47.483 ******* 2026-02-27 01:09:29.763226 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:09:29.763239 | orchestrator | 2026-02-27 01:09:29.763252 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-27 01:09:29.763266 | orchestrator | Friday 27 February 2026 01:08:05 +0000 (0:00:00.793) 0:00:48.277 ******* 2026-02-27 01:09:29.763284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.763368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.763383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.763397 | orchestrator | 2026-02-27 01:09:29.763410 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-27 01:09:29.763423 | orchestrator | Friday 27 February 2026 01:08:08 +0000 (0:00:02.724) 0:00:51.002 ******* 2026-02-27 01:09:29.763437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.763467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.763482 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:29.763496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.763516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.763530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.763544 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.763566 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:29.763580 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:29.763593 | orchestrator | 2026-02-27 01:09:29.763607 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-27 01:09:29.763620 | orchestrator | Friday 27 February 2026 01:08:09 +0000 (0:00:00.724) 0:00:51.726 ******* 2026-02-27 01:09:29.763639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:29.763653 | orchestrator | 2026-02-27 01:09:29 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:29.763665 | orchestrator | 2026-02-27 01:09:29 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:09:29.763677 | orchestrator | 2026-02-27 01:09:29 | INFO  | Task 5967a692-43e0-454d-baf1-9d1efa894588 is in state SUCCESS 2026-02-27 01:09:29.763690 | orchestrator | 2026-02-27 01:09:29 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:09:29.763703 | orchestrator | 2026-02-27 01:09:29 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:09:29.763737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.763751 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:29.763763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.763785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.763799 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:29.763823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.763837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.763851 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:29.763865 | orchestrator | 2026-02-27 01:09:29.763877 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-27 01:09:29.763890 | orchestrator | Friday 27 February 2026 01:08:11 +0000 (0:00:02.577) 0:00:54.304 ******* 2026-02-27 01:09:29.763909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.763996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764050 | orchestrator | 2026-02-27 01:09:29.764063 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-27 01:09:29.764076 | orchestrator | Friday 27 February 2026 01:08:15 +0000 (0:00:03.477) 0:00:57.781 ******* 2026-02-27 01:09:29.764090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.764113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.764127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.764147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764202 | orchestrator | 2026-02-27 01:09:29.764217 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-27 01:09:29.764230 | orchestrator | Friday 27 February 2026 01:08:27 +0000 (0:00:11.922) 0:01:09.704 ******* 2026-02-27 01:09:29.764276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.764292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.764305 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:29.764324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.764348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.764360 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:29.764382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-27 01:09:29.764396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:09:29.764410 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:29.764424 | orchestrator | 2026-02-27 01:09:29.764437 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-27 01:09:29.764450 | orchestrator | Friday 27 February 2026 01:08:29 +0000 (0:00:01.866) 0:01:11.570 ******* 2026-02-27 01:09:29.764469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.764496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.764509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-27 01:09:29.764530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:09:29.764584 | orchestrator | 2026-02-27 01:09:29.764597 | orchestrator | TASK [magnum : include_tasks] 
************************************************** 2026-02-27 01:09:29.764610 | orchestrator | Friday 27 February 2026 01:08:31 +0000 (0:00:02.621) 0:01:14.192 ******* 2026-02-27 01:09:29.764623 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:29.764635 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:29.764647 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:29.764660 | orchestrator | 2026-02-27 01:09:29.764673 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-27 01:09:29.764686 | orchestrator | Friday 27 February 2026 01:08:32 +0000 (0:00:00.334) 0:01:14.527 ******* 2026-02-27 01:09:29.764699 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.764711 | orchestrator | 2026-02-27 01:09:29.764724 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-27 01:09:29.764736 | orchestrator | Friday 27 February 2026 01:08:34 +0000 (0:00:02.394) 0:01:16.921 ******* 2026-02-27 01:09:29.764748 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.764761 | orchestrator | 2026-02-27 01:09:29.764774 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-27 01:09:29.764786 | orchestrator | Friday 27 February 2026 01:08:37 +0000 (0:00:02.789) 0:01:19.711 ******* 2026-02-27 01:09:29.764799 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.764812 | orchestrator | 2026-02-27 01:09:29.764824 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-27 01:09:29.764836 | orchestrator | Friday 27 February 2026 01:08:54 +0000 (0:00:17.143) 0:01:36.855 ******* 2026-02-27 01:09:29.764848 | orchestrator | 2026-02-27 01:09:29.764861 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-27 01:09:29.764872 | orchestrator | Friday 27 February 2026 01:08:54 +0000 
(0:00:00.282) 0:01:37.137 ******* 2026-02-27 01:09:29.764885 | orchestrator | 2026-02-27 01:09:29.764897 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-27 01:09:29.764909 | orchestrator | Friday 27 February 2026 01:08:54 +0000 (0:00:00.190) 0:01:37.328 ******* 2026-02-27 01:09:29.764988 | orchestrator | 2026-02-27 01:09:29.765003 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-27 01:09:29.765015 | orchestrator | Friday 27 February 2026 01:08:55 +0000 (0:00:00.386) 0:01:37.715 ******* 2026-02-27 01:09:29.765028 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.765041 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:09:29.765054 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:09:29.765067 | orchestrator | 2026-02-27 01:09:29.765080 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-27 01:09:29.765093 | orchestrator | Friday 27 February 2026 01:09:14 +0000 (0:00:19.588) 0:01:57.304 ******* 2026-02-27 01:09:29.765106 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:29.765119 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:09:29.765132 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:09:29.765145 | orchestrator | 2026-02-27 01:09:29.765157 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:09:29.765171 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-27 01:09:29.765194 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-27 01:09:29.765207 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-27 01:09:29.765230 | orchestrator | 2026-02-27 01:09:29.765243 | orchestrator | 2026-02-27 01:09:29.765255 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:09:29.765268 | orchestrator | Friday 27 February 2026 01:09:26 +0000 (0:00:11.723) 0:02:09.027 ******* 2026-02-27 01:09:29.765280 | orchestrator | =============================================================================== 2026-02-27 01:09:29.765293 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.59s 2026-02-27 01:09:29.765306 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.14s 2026-02-27 01:09:29.765319 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 11.92s 2026-02-27 01:09:29.765333 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 11.72s 2026-02-27 01:09:29.765346 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.23s 2026-02-27 01:09:29.765359 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.57s 2026-02-27 01:09:29.765372 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.36s 2026-02-27 01:09:29.765384 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.20s 2026-02-27 01:09:29.765397 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 4.07s 2026-02-27 01:09:29.765410 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.88s 2026-02-27 01:09:29.765423 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.71s 2026-02-27 01:09:29.765436 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.53s 2026-02-27 01:09:29.765448 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.48s 2026-02-27 01:09:29.765463 | orchestrator | magnum 
: Creating Magnum trustee user role ------------------------------ 3.41s 2026-02-27 01:09:29.765476 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.83s 2026-02-27 01:09:29.765498 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.79s 2026-02-27 01:09:29.765506 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.72s 2026-02-27 01:09:29.765513 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.62s 2026-02-27 01:09:29.765520 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 2.58s 2026-02-27 01:09:29.765526 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.39s 2026-02-27 01:09:32.804386 | orchestrator | 2026-02-27 01:09:32 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:32.806692 | orchestrator | 2026-02-27 01:09:32 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:32.811693 | orchestrator | 2026-02-27 01:09:32 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:09:32.813017 | orchestrator | 2026-02-27 01:09:32 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:09:32.813059 | orchestrator | 2026-02-27 01:09:32 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:35.860627 | orchestrator | 2026-02-27 01:09:35 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:35.861575 | orchestrator | 2026-02-27 01:09:35 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:35.861666 | orchestrator | 2026-02-27 01:09:35 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:09:35.863046 | orchestrator | 2026-02-27 01:09:35 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is 
in state STARTED 2026-02-27 01:09:35.863121 | orchestrator | 2026-02-27 01:09:35 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:38.899765 | orchestrator | 2026-02-27 01:09:38 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:38.900383 | orchestrator | 2026-02-27 01:09:38 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:38.901382 | orchestrator | 2026-02-27 01:09:38 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:09:38.904016 | orchestrator | 2026-02-27 01:09:38 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:09:38.904050 | orchestrator | 2026-02-27 01:09:38 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:41.937481 | orchestrator | 2026-02-27 01:09:41 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:41.940842 | orchestrator | 2026-02-27 01:09:41 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:41.941551 | orchestrator | 2026-02-27 01:09:41 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:09:41.943801 | orchestrator | 2026-02-27 01:09:41 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state STARTED 2026-02-27 01:09:41.943846 | orchestrator | 2026-02-27 01:09:41 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:44.983811 | orchestrator | 2026-02-27 01:09:44 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:44.984449 | orchestrator | 2026-02-27 01:09:44 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:44.987073 | orchestrator | 2026-02-27 01:09:44 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:09:44.990214 | orchestrator | 2026-02-27 01:09:44 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 
01:09:44.995367 | orchestrator | 2026-02-27 01:09:44 | INFO  | Task 4750cef0-fbbd-4d25-b2b9-46cd025f04af is in state SUCCESS 2026-02-27 01:09:44.997463 | orchestrator | 2026-02-27 01:09:44.997504 | orchestrator | 2026-02-27 01:09:44.997516 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:09:44.997529 | orchestrator | 2026-02-27 01:09:44.997540 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:09:44.997553 | orchestrator | Friday 27 February 2026 01:04:33 +0000 (0:00:00.307) 0:00:00.307 ******* 2026-02-27 01:09:44.997565 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:44.997577 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:44.997588 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:44.997599 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:09:44.997609 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:09:44.997620 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:09:44.997631 | orchestrator | 2026-02-27 01:09:44.997642 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:09:44.997653 | orchestrator | Friday 27 February 2026 01:04:35 +0000 (0:00:01.414) 0:00:01.722 ******* 2026-02-27 01:09:44.997664 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-27 01:09:44.997676 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-27 01:09:44.997687 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-27 01:09:44.997716 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-27 01:09:44.997727 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-27 01:09:44.997738 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-27 01:09:44.997749 | orchestrator | 2026-02-27 01:09:44.997760 | orchestrator | PLAY [Apply role neutron] 
****************************************************** 2026-02-27 01:09:44.997771 | orchestrator | 2026-02-27 01:09:44.997782 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-27 01:09:44.997793 | orchestrator | Friday 27 February 2026 01:04:36 +0000 (0:00:01.224) 0:00:02.947 ******* 2026-02-27 01:09:44.997804 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:09:44.997842 | orchestrator | 2026-02-27 01:09:44.997854 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-27 01:09:44.997865 | orchestrator | Friday 27 February 2026 01:04:37 +0000 (0:00:01.390) 0:00:04.338 ******* 2026-02-27 01:09:44.997876 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:44.997886 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:44.997897 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:44.997908 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:09:44.997918 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:09:44.997958 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:09:44.997989 | orchestrator | 2026-02-27 01:09:44.998063 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-27 01:09:44.998081 | orchestrator | Friday 27 February 2026 01:04:38 +0000 (0:00:01.252) 0:00:05.591 ******* 2026-02-27 01:09:44.998094 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:44.998106 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:44.998120 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:44.998132 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:09:44.998145 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:09:44.998157 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:09:44.998170 | orchestrator | 2026-02-27 01:09:44.998183 | orchestrator | TASK [neutron : Check for 
ML2/OVN presence] ************************************ 2026-02-27 01:09:44.998195 | orchestrator | Friday 27 February 2026 01:04:39 +0000 (0:00:01.042) 0:00:06.633 ******* 2026-02-27 01:09:44.998208 | orchestrator | ok: [testbed-node-0] => { 2026-02-27 01:09:44.998221 | orchestrator |  "changed": false, 2026-02-27 01:09:44.998234 | orchestrator |  "msg": "All assertions passed" 2026-02-27 01:09:44.998247 | orchestrator | } 2026-02-27 01:09:44.998260 | orchestrator | ok: [testbed-node-1] => { 2026-02-27 01:09:44.998272 | orchestrator |  "changed": false, 2026-02-27 01:09:44.998285 | orchestrator |  "msg": "All assertions passed" 2026-02-27 01:09:44.998311 | orchestrator | } 2026-02-27 01:09:44.998339 | orchestrator | ok: [testbed-node-2] => { 2026-02-27 01:09:44.998358 | orchestrator |  "changed": false, 2026-02-27 01:09:44.998389 | orchestrator |  "msg": "All assertions passed" 2026-02-27 01:09:44.998406 | orchestrator | } 2026-02-27 01:09:44.998423 | orchestrator | ok: [testbed-node-3] => { 2026-02-27 01:09:44.998441 | orchestrator |  "changed": false, 2026-02-27 01:09:44.998458 | orchestrator |  "msg": "All assertions passed" 2026-02-27 01:09:44.998476 | orchestrator | } 2026-02-27 01:09:44.998491 | orchestrator | ok: [testbed-node-4] => { 2026-02-27 01:09:44.998507 | orchestrator |  "changed": false, 2026-02-27 01:09:44.998524 | orchestrator |  "msg": "All assertions passed" 2026-02-27 01:09:44.998543 | orchestrator | } 2026-02-27 01:09:44.998561 | orchestrator | ok: [testbed-node-5] => { 2026-02-27 01:09:44.998581 | orchestrator |  "changed": false, 2026-02-27 01:09:44.998600 | orchestrator |  "msg": "All assertions passed" 2026-02-27 01:09:44.998619 | orchestrator | } 2026-02-27 01:09:44.998637 | orchestrator | 2026-02-27 01:09:44.998653 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-27 01:09:44.998664 | orchestrator | Friday 27 February 2026 01:04:40 +0000 (0:00:00.965) 0:00:07.598 ******* 2026-02-27 
01:09:44.998675 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:44.998686 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:44.998697 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:44.998708 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:44.998719 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:44.998729 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:44.998740 | orchestrator | 2026-02-27 01:09:44.998751 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-27 01:09:44.998763 | orchestrator | Friday 27 February 2026 01:04:41 +0000 (0:00:00.743) 0:00:08.342 ******* 2026-02-27 01:09:44.998773 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-27 01:09:44.998799 | orchestrator | 2026-02-27 01:09:44.998810 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-27 01:09:44.998820 | orchestrator | Friday 27 February 2026 01:04:45 +0000 (0:00:03.639) 0:00:11.981 ******* 2026-02-27 01:09:44.998831 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-27 01:09:44.998844 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-27 01:09:44.998855 | orchestrator | 2026-02-27 01:09:44.998881 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-27 01:09:44.998892 | orchestrator | Friday 27 February 2026 01:04:52 +0000 (0:00:06.863) 0:00:18.845 ******* 2026-02-27 01:09:44.998903 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-27 01:09:44.998914 | orchestrator | 2026-02-27 01:09:44.998925 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-27 01:09:44.999028 | orchestrator | Friday 27 February 2026 01:04:55 +0000 (0:00:03.572) 
0:00:22.417 ******* 2026-02-27 01:09:44.999040 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-27 01:09:44.999051 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-02-27 01:09:44.999062 | orchestrator | 2026-02-27 01:09:44.999073 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-27 01:09:44.999084 | orchestrator | Friday 27 February 2026 01:05:00 +0000 (0:00:04.476) 0:00:26.894 ******* 2026-02-27 01:09:44.999095 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-27 01:09:44.999106 | orchestrator | 2026-02-27 01:09:44.999116 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-27 01:09:44.999127 | orchestrator | Friday 27 February 2026 01:05:04 +0000 (0:00:03.949) 0:00:30.843 ******* 2026-02-27 01:09:44.999147 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-27 01:09:44.999158 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-27 01:09:44.999168 | orchestrator | 2026-02-27 01:09:44.999179 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-27 01:09:44.999190 | orchestrator | Friday 27 February 2026 01:05:12 +0000 (0:00:08.132) 0:00:38.976 ******* 2026-02-27 01:09:44.999201 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:44.999212 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:44.999222 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:44.999233 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:44.999244 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:44.999255 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:44.999266 | orchestrator | 2026-02-27 01:09:44.999277 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-27 01:09:44.999288 
| orchestrator | Friday 27 February 2026 01:05:12 +0000 (0:00:00.651) 0:00:39.627 ******* 2026-02-27 01:09:44.999299 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:44.999310 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:44.999320 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:44.999331 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:44.999342 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:44.999352 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:44.999363 | orchestrator | 2026-02-27 01:09:44.999374 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-27 01:09:44.999385 | orchestrator | Friday 27 February 2026 01:05:15 +0000 (0:00:02.142) 0:00:41.770 ******* 2026-02-27 01:09:44.999396 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:09:44.999407 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:09:44.999418 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:09:44.999429 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:09:44.999440 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:09:44.999450 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:09:44.999461 | orchestrator | 2026-02-27 01:09:44.999472 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-27 01:09:44.999492 | orchestrator | Friday 27 February 2026 01:05:17 +0000 (0:00:01.930) 0:00:43.700 ******* 2026-02-27 01:09:44.999503 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:44.999513 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:44.999524 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:44.999535 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:44.999546 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:44.999556 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:44.999567 | orchestrator | 2026-02-27 01:09:44.999578 | orchestrator | TASK 
[neutron : Ensuring config directories exist] ***************************** 2026-02-27 01:09:44.999589 | orchestrator | Friday 27 February 2026 01:05:19 +0000 (0:00:02.321) 0:00:46.021 ******* 2026-02-27 01:09:44.999630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:44.999670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:44.999689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:44.999702 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:44.999722 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:44.999734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:44.999746 | orchestrator | 2026-02-27 01:09:44.999757 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-27 01:09:44.999768 | orchestrator | Friday 27 February 2026 01:05:22 +0000 (0:00:02.914) 0:00:48.936 ******* 2026-02-27 01:09:44.999780 | orchestrator | [WARNING]: Skipped 2026-02-27 01:09:44.999791 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-27 01:09:44.999802 | orchestrator | due to this access issue: 2026-02-27 01:09:44.999813 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-27 01:09:44.999824 | orchestrator | a directory 2026-02-27 01:09:44.999835 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:09:44.999847 | orchestrator | 2026-02-27 01:09:44.999868 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-27 01:09:44.999880 | orchestrator | Friday 27 February 2026 01:05:23 +0000 (0:00:00.950) 0:00:49.886 ******* 2026-02-27 01:09:44.999891 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:09:44.999904 | orchestrator | 2026-02-27 01:09:44.999915 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-27 01:09:44.999926 | orchestrator | Friday 27 February 2026 01:05:24 +0000 (0:00:01.325) 0:00:51.212 ******* 2026-02-27 01:09:44.999970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:44.999993 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.000005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.000018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.000040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.000057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.000075 | orchestrator | 2026-02-27 01:09:45.000086 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-27 01:09:45.000097 | orchestrator | Friday 27 February 2026 01:05:28 +0000 (0:00:04.050) 0:00:55.262 ******* 2026-02-27 01:09:45.000109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000120 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.000132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000144 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.000163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000175 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.000193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000215 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.000227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000238 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.000250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000261 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.000272 | orchestrator | 2026-02-27 01:09:45.000283 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-27 01:09:45.000295 | orchestrator | Friday 27 February 2026 01:05:32 +0000 (0:00:03.815) 0:00:59.078 ******* 2026-02-27 01:09:45.000307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000318 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.000338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000359 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.000381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000393 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.000404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000416 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.000428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000439 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.000451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000462 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.000473 | orchestrator | 2026-02-27 01:09:45.000485 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-27 01:09:45.000501 | orchestrator | Friday 27 February 2026 01:05:36 +0000 (0:00:04.533) 0:01:03.611 ******* 2026-02-27 01:09:45.000512 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.000528 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.000540 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.000550 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.000561 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.000572 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.000583 | orchestrator | 2026-02-27 01:09:45.000595 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-27 01:09:45.000614 | orchestrator | Friday 27 February 2026 01:05:40 +0000 (0:00:03.783) 0:01:07.395 ******* 2026-02-27 01:09:45.000633 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.000652 | orchestrator | 2026-02-27 01:09:45.000671 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-27 01:09:45.000689 | orchestrator | Friday 27 February 2026 01:05:40 +0000 (0:00:00.153) 0:01:07.548 ******* 2026-02-27 01:09:45.000709 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.000721 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.000732 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.000743 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.000760 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.000771 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.000782 | orchestrator | 2026-02-27 01:09:45.000793 | 
orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-27 01:09:45.000804 | orchestrator | Friday 27 February 2026 01:05:41 +0000 (0:00:00.959) 0:01:08.508 ******* 2026-02-27 01:09:45.000815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000827 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.000838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000850 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.000861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000879 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.000900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000912 | orchestrator | skipping: [testbed-node-4] 2026-02-27 
01:09:45.000945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.000959 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.000970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.000982 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.000993 | orchestrator | 2026-02-27 01:09:45.001004 | orchestrator | TASK [neutron : Copying over config.json 
files for services] ******************* 2026-02-27 01:09:45.001016 | orchestrator | Friday 27 February 2026 01:05:46 +0000 (0:00:04.338) 0:01:12.846 ******* 2026-02-27 01:09:45.001027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001082 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.001094 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.001105 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.001123 | orchestrator | 2026-02-27 01:09:45.001134 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-27 01:09:45.001145 | orchestrator | Friday 27 February 2026 01:05:52 +0000 (0:00:06.506) 0:01:19.353 ******* 2026-02-27 01:09:45.001164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.001206 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.001218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.001254 | orchestrator | 2026-02-27 01:09:45.001265 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-27 01:09:45.001276 | orchestrator | Friday 27 February 2026 01:06:03 +0000 (0:00:10.687) 0:01:30.040 ******* 2026-02-27 01:09:45.001292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.001303 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.001315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.001326 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.001338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.001355 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.001366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.001377 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.001396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.001407 | orchestrator | skipping: [testbed-node-4] 
2026-02-27 01:09:45.001424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.001435 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.001446 | orchestrator | 2026-02-27 01:09:45.001457 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-27 01:09:45.001468 | orchestrator | Friday 27 February 2026 01:06:07 +0000 (0:00:03.631) 0:01:33.671 ******* 2026-02-27 01:09:45.001479 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.001490 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.001501 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.001512 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:45.001523 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:09:45.001534 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:09:45.001544 | orchestrator | 2026-02-27 01:09:45.001562 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-27 01:09:45.001573 | orchestrator | Friday 27 February 2026 01:06:10 +0000 (0:00:03.827) 0:01:37.499 ******* 2026-02-27 01:09:45.001584 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.001596 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.001607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.001618 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.001637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.001649 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.001665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.001712 | orchestrator | 2026-02-27 01:09:45.001722 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-27 01:09:45.001734 | orchestrator | Friday 27 February 2026 01:06:15 +0000 (0:00:04.750) 0:01:42.250 ******* 2026-02-27 01:09:45.001744 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.001755 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.001766 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.001776 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.001787 | orchestrator | skipping: [testbed-node-5] 
2026-02-27 01:09:45.001798 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.001809 | orchestrator | 2026-02-27 01:09:45.001819 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-02-27 01:09:45.001830 | orchestrator | Friday 27 February 2026 01:06:19 +0000 (0:00:03.604) 0:01:45.854 ******* 2026-02-27 01:09:45.001841 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.001852 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.001863 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.001873 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.001884 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.001895 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.001906 | orchestrator | 2026-02-27 01:09:45.001916 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-27 01:09:45.001927 | orchestrator | Friday 27 February 2026 01:06:21 +0000 (0:00:02.503) 0:01:48.358 ******* 2026-02-27 01:09:45.001964 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.001976 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.001987 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.001998 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.002009 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.002076 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.002087 | orchestrator | 2026-02-27 01:09:45.002099 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-27 01:09:45.002110 | orchestrator | Friday 27 February 2026 01:06:23 +0000 (0:00:02.025) 0:01:50.384 ******* 2026-02-27 01:09:45.002121 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.002133 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.002144 | orchestrator | skipping: [testbed-node-1] 
2026-02-27 01:09:45.002155 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.002166 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.002177 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.002197 | orchestrator | 2026-02-27 01:09:45.002208 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-27 01:09:45.002219 | orchestrator | Friday 27 February 2026 01:06:25 +0000 (0:00:02.110) 0:01:52.494 ******* 2026-02-27 01:09:45.002230 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.002246 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.002257 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.002268 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.002279 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.002290 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.002301 | orchestrator | 2026-02-27 01:09:45.002312 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-27 01:09:45.002323 | orchestrator | Friday 27 February 2026 01:06:28 +0000 (0:00:02.170) 0:01:54.664 ******* 2026-02-27 01:09:45.002334 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.002345 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.002356 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.002366 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.002377 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.002388 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.002399 | orchestrator | 2026-02-27 01:09:45.002410 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-27 01:09:45.002421 | orchestrator | Friday 27 February 2026 01:06:30 +0000 (0:00:02.252) 0:01:56.916 ******* 2026-02-27 01:09:45.002432 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-27 01:09:45.002443 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.002455 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-27 01:09:45.002465 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.002476 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-27 01:09:45.002487 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.002499 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-27 01:09:45.002510 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.002520 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-27 01:09:45.002531 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.002542 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-27 01:09:45.002553 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.002564 | orchestrator | 2026-02-27 01:09:45.002575 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-27 01:09:45.002586 | orchestrator | Friday 27 February 2026 01:06:32 +0000 (0:00:02.195) 0:01:59.112 ******* 2026-02-27 01:09:45.002598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.002609 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.002630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.002648 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.002664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.002676 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.002687 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.002699 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.002710 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.002722 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.002733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.002750 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.002761 | orchestrator | 2026-02-27 01:09:45.002773 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-27 01:09:45.002784 | orchestrator | Friday 27 February 2026 01:06:34 +0000 (0:00:02.396) 0:02:01.508 ******* 2026-02-27 01:09:45.002805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.002817 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.002834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.002845 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.002857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.002869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.002887 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.002898 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.002915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.002927 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.002973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.002985 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.002996 | orchestrator | 2026-02-27 01:09:45.003008 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-27 01:09:45.003019 | orchestrator | Friday 27 February 2026 01:06:36 +0000 (0:00:02.132) 0:02:03.641 ******* 2026-02-27 01:09:45.003030 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003041 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003051 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003062 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.003073 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.003091 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.003111 | orchestrator | 2026-02-27 01:09:45.003139 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-27 01:09:45.003161 | orchestrator | Friday 27 February 2026 01:06:40 +0000 
(0:00:03.250) 0:02:06.892 ******* 2026-02-27 01:09:45.003179 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003197 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003215 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003232 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:09:45.003251 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:09:45.003269 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:09:45.003288 | orchestrator | 2026-02-27 01:09:45.003307 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-27 01:09:45.003327 | orchestrator | Friday 27 February 2026 01:06:44 +0000 (0:00:04.308) 0:02:11.200 ******* 2026-02-27 01:09:45.003345 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003365 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003376 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.003387 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003398 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.003408 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.003429 | orchestrator | 2026-02-27 01:09:45.003441 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-27 01:09:45.003452 | orchestrator | Friday 27 February 2026 01:06:48 +0000 (0:00:03.506) 0:02:14.706 ******* 2026-02-27 01:09:45.003463 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003474 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003485 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003495 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.003506 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.003517 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.003527 | orchestrator | 2026-02-27 01:09:45.003538 | orchestrator | TASK [neutron : Copying over 
bgp_dragent.ini] ********************************** 2026-02-27 01:09:45.003549 | orchestrator | Friday 27 February 2026 01:06:51 +0000 (0:00:03.360) 0:02:18.067 ******* 2026-02-27 01:09:45.003560 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.003570 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003581 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003592 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003603 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.003613 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.003624 | orchestrator | 2026-02-27 01:09:45.003635 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-27 01:09:45.003646 | orchestrator | Friday 27 February 2026 01:06:53 +0000 (0:00:02.353) 0:02:20.421 ******* 2026-02-27 01:09:45.003657 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003668 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003678 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003689 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.003700 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.003710 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.003721 | orchestrator | 2026-02-27 01:09:45.003732 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-27 01:09:45.003743 | orchestrator | Friday 27 February 2026 01:06:55 +0000 (0:00:01.889) 0:02:22.310 ******* 2026-02-27 01:09:45.003754 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003764 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003775 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003786 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.003797 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.003808 | orchestrator | 
skipping: [testbed-node-5] 2026-02-27 01:09:45.003818 | orchestrator | 2026-02-27 01:09:45.003829 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-02-27 01:09:45.003840 | orchestrator | Friday 27 February 2026 01:06:57 +0000 (0:00:02.006) 0:02:24.317 ******* 2026-02-27 01:09:45.003851 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.003862 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.003872 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.003883 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.003894 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.003904 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.003916 | orchestrator | 2026-02-27 01:09:45.003927 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-27 01:09:45.004003 | orchestrator | Friday 27 February 2026 01:07:02 +0000 (0:00:05.028) 0:02:29.346 ******* 2026-02-27 01:09:45.004015 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.004026 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.004037 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.004048 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.004059 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.004070 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.004081 | orchestrator | 2026-02-27 01:09:45.004092 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-27 01:09:45.004103 | orchestrator | Friday 27 February 2026 01:07:05 +0000 (0:00:03.001) 0:02:32.347 ******* 2026-02-27 01:09:45.004123 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-27 01:09:45.004136 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.004147 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-27 01:09:45.004158 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.004176 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-27 01:09:45.004187 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.004199 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-27 01:09:45.004211 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.004222 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-27 01:09:45.004233 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.004244 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-27 01:09:45.004255 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.004266 | orchestrator | 2026-02-27 01:09:45.004277 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-02-27 01:09:45.004288 | orchestrator | Friday 27 February 2026 01:07:08 +0000 (0:00:02.635) 0:02:34.982 ******* 2026-02-27 01:09:45.004300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.004310 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.004321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.004331 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.004348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.004365 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.004380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-27 01:09:45.004391 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.004401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2026-02-27 01:09:45.004411 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.004421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-27 01:09:45.004431 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.004441 | orchestrator | 2026-02-27 01:09:45.004451 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-27 01:09:45.004461 | orchestrator | Friday 27 February 2026 01:07:11 +0000 (0:00:03.074) 0:02:38.056 ******* 2026-02-27 01:09:45.004471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.004494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.004513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.004525 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-27 01:09:45.004536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.004546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-27 01:09:45.004561 | orchestrator | 2026-02-27 01:09:45.004571 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-27 01:09:45.004581 | orchestrator | Friday 27 February 2026 01:07:17 +0000 (0:00:05.961) 0:02:44.018 ******* 2026-02-27 01:09:45.004591 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:09:45.004601 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:09:45.004611 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:09:45.004621 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:09:45.004631 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:09:45.004646 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:09:45.004657 | orchestrator | 2026-02-27 01:09:45.004667 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-27 01:09:45.004676 | orchestrator | Friday 27 February 2026 01:07:18 +0000 (0:00:00.828) 0:02:44.847 ******* 2026-02-27 01:09:45.004686 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:45.004696 | orchestrator | 2026-02-27 01:09:45.004706 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-27 01:09:45.004716 | orchestrator | Friday 27 February 2026 01:07:20 +0000 (0:00:02.421) 0:02:47.268 ******* 2026-02-27 01:09:45.004726 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:45.004735 | orchestrator | 2026-02-27 01:09:45.004745 | orchestrator | TASK [neutron : Running 
Neutron bootstrap container] *************************** 2026-02-27 01:09:45.004755 | orchestrator | Friday 27 February 2026 01:07:23 +0000 (0:00:02.443) 0:02:49.712 ******* 2026-02-27 01:09:45.004764 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:45.004774 | orchestrator | 2026-02-27 01:09:45.004784 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-27 01:09:45.004794 | orchestrator | Friday 27 February 2026 01:08:10 +0000 (0:00:47.225) 0:03:36.937 ******* 2026-02-27 01:09:45.004804 | orchestrator | 2026-02-27 01:09:45.004818 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-27 01:09:45.004829 | orchestrator | Friday 27 February 2026 01:08:10 +0000 (0:00:00.230) 0:03:37.168 ******* 2026-02-27 01:09:45.004839 | orchestrator | 2026-02-27 01:09:45.004848 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-27 01:09:45.004858 | orchestrator | Friday 27 February 2026 01:08:11 +0000 (0:00:00.922) 0:03:38.090 ******* 2026-02-27 01:09:45.004867 | orchestrator | 2026-02-27 01:09:45.004877 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-27 01:09:45.004887 | orchestrator | Friday 27 February 2026 01:08:11 +0000 (0:00:00.243) 0:03:38.333 ******* 2026-02-27 01:09:45.004897 | orchestrator | 2026-02-27 01:09:45.004907 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-27 01:09:45.004917 | orchestrator | Friday 27 February 2026 01:08:11 +0000 (0:00:00.115) 0:03:38.449 ******* 2026-02-27 01:09:45.004927 | orchestrator | 2026-02-27 01:09:45.004954 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-27 01:09:45.004965 | orchestrator | Friday 27 February 2026 01:08:11 +0000 (0:00:00.078) 0:03:38.528 ******* 2026-02-27 01:09:45.004975 | 
orchestrator | 2026-02-27 01:09:45.004984 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-02-27 01:09:45.004994 | orchestrator | Friday 27 February 2026 01:08:11 +0000 (0:00:00.071) 0:03:38.600 ******* 2026-02-27 01:09:45.005004 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:09:45.005014 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:09:45.005024 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:09:45.005034 | orchestrator | 2026-02-27 01:09:45.005043 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-27 01:09:45.005059 | orchestrator | Friday 27 February 2026 01:08:46 +0000 (0:00:34.100) 0:04:12.701 ******* 2026-02-27 01:09:45.005091 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:09:45.005112 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:09:45.005128 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:09:45.005144 | orchestrator | 2026-02-27 01:09:45.005159 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:09:45.005175 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-27 01:09:45.005190 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-27 01:09:45.005204 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-27 01:09:45.005220 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-27 01:09:45.005237 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-27 01:09:45.005254 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-27 01:09:45.005270 | orchestrator | 2026-02-27 
01:09:45.005288 | orchestrator | 2026-02-27 01:09:45.005305 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:09:45.005321 | orchestrator | Friday 27 February 2026 01:09:42 +0000 (0:00:56.848) 0:05:09.550 ******* 2026-02-27 01:09:45.005339 | orchestrator | =============================================================================== 2026-02-27 01:09:45.005357 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 56.85s 2026-02-27 01:09:45.005373 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 47.23s 2026-02-27 01:09:45.005387 | orchestrator | neutron : Restart neutron-server container ----------------------------- 34.10s 2026-02-27 01:09:45.005397 | orchestrator | neutron : Copying over neutron.conf ------------------------------------ 10.69s 2026-02-27 01:09:45.005407 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.13s 2026-02-27 01:09:45.005416 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.86s 2026-02-27 01:09:45.005426 | orchestrator | neutron : Copying over config.json files for services ------------------- 6.51s 2026-02-27 01:09:45.005436 | orchestrator | neutron : Check neutron containers -------------------------------------- 5.96s 2026-02-27 01:09:45.005456 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 5.03s 2026-02-27 01:09:45.005466 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.75s 2026-02-27 01:09:45.005476 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.53s 2026-02-27 01:09:45.005486 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.48s 2026-02-27 01:09:45.005496 | orchestrator | neutron : Copying over existing policy file 
----------------------------- 4.34s 2026-02-27 01:09:45.005506 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.31s 2026-02-27 01:09:45.005515 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.05s 2026-02-27 01:09:45.005525 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.95s 2026-02-27 01:09:45.005535 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.83s 2026-02-27 01:09:45.005544 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.82s 2026-02-27 01:09:45.005561 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.78s 2026-02-27 01:09:45.005571 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.64s 2026-02-27 01:09:45.005581 | orchestrator | 2026-02-27 01:09:44 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:48.039753 | orchestrator | 2026-02-27 01:09:48 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:48.040601 | orchestrator | 2026-02-27 01:09:48 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:48.041261 | orchestrator | 2026-02-27 01:09:48 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:09:48.043601 | orchestrator | 2026-02-27 01:09:48 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:09:48.043629 | orchestrator | 2026-02-27 01:09:48 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:09:51.079197 | orchestrator | 2026-02-27 01:09:51 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED 2026-02-27 01:09:51.082495 | orchestrator | 2026-02-27 01:09:51 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED 2026-02-27 01:09:51.083125 | orchestrator | 
daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:11:50.184851 | orchestrator | 2026-02-27 01:11:50 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:11:50.185101 | orchestrator | 2026-02-27 01:11:50 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:11:53.227260 | orchestrator | 2026-02-27 01:11:53 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:11:53.230194 | orchestrator | 2026-02-27 01:11:53 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED
2026-02-27 01:11:53.232864 | orchestrator | 2026-02-27 01:11:53 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:11:53.235351 | orchestrator | 2026-02-27 01:11:53 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:11:53.235386 | orchestrator | 2026-02-27 01:11:53 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:11:56.271154 | orchestrator | 2026-02-27 01:11:56 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:11:56.271432 | orchestrator | 2026-02-27 01:11:56 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state STARTED
2026-02-27 01:11:56.272901 | orchestrator | 2026-02-27 01:11:56 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:11:56.273905 | orchestrator | 2026-02-27 01:11:56 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:11:56.273948 | orchestrator | 2026-02-27 01:11:56 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:11:59.309251 | orchestrator | 2026-02-27 01:11:59 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:11:59.312463 | orchestrator | 2026-02-27 01:11:59 | INFO  | Task eabcc360-1c9a-4273-9f9c-0059b6ce126d is in state SUCCESS
2026-02-27 01:11:59.314596 | orchestrator |
2026-02-27 01:11:59.314686 | orchestrator |
2026-02-27 01:11:59.314703 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 01:11:59.314716 | orchestrator |
2026-02-27 01:11:59.314728 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 01:11:59.314740 | orchestrator | Friday 27 February 2026 01:08:06 +0000 (0:00:00.421) 0:00:00.421 *******
2026-02-27 01:11:59.314751 | orchestrator | ok: [testbed-manager]
2026-02-27 01:11:59.314765 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:11:59.314777 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:11:59.314789 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:11:59.314800 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:11:59.314815 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:11:59.314864 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:11:59.314889 | orchestrator |
2026-02-27 01:11:59.314912 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 01:11:59.314930 | orchestrator | Friday 27 February 2026 01:08:07 +0000 (0:00:00.882) 0:00:01.303 *******
2026-02-27 01:11:59.314949 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-27 01:11:59.314968 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-27 01:11:59.314986 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-27 01:11:59.315209 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-27 01:11:59.315226 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-27 01:11:59.315239 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-27 01:11:59.315251 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-27 01:11:59.315264 | orchestrator |
2026-02-27 01:11:59.315278 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-27 01:11:59.315291 | orchestrator |
2026-02-27 01:11:59.315304 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-27 01:11:59.315317 | orchestrator | Friday 27 February 2026 01:08:07 +0000 (0:00:00.749) 0:00:02.053 *******
2026-02-27 01:11:59.315332 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:11:59.315346 | orchestrator |
2026-02-27 01:11:59.315358 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-27 01:11:59.315371 | orchestrator | Friday 27 February 2026 01:08:09 +0000 (0:00:01.812) 0:00:03.865 *******
2026-02-27 01:11:59.315578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 01:11:59.315597 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-27 01:11:59.315612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 01:11:59.315626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 01:11:59.315711 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 01:11:59.315737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 01:11:59.315749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 01:11:59.315760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 01:11:59.315772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 01:11:59.315783 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-27 01:11:59.315802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-27 01:11:59.315823 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-27 01:11:59.315836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.315855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.315866 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.315879 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.315890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.315901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.315928 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-27 01:11:59.315950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.315962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.315974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.315986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316062 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316125 | orchestrator | 2026-02-27 01:11:59.316158 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-27 01:11:59.316169 | orchestrator | Friday 27 February 2026 01:08:13 +0000 (0:00:04.201) 0:00:08.066 ******* 2026-02-27 01:11:59.316180 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:11:59.316191 | orchestrator | 2026-02-27 01:11:59.316202 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-27 01:11:59.316213 | orchestrator | Friday 27 February 2026 01:08:17 +0000 (0:00:03.328) 0:00:11.395 ******* 2026-02-27 01:11:59.316326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.316339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.316379 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.316499 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.316586 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-27 01:11:59.316599 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.316611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316706 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.316742 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.316761 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.316862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316924 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.316969 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.317058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 
'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.317084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.317105 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-27 01:11:59.317127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.317147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.317189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.318399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.318462 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.318484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.318503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.318523 | orchestrator | 2026-02-27 01:11:59.318543 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-27 01:11:59.318563 | orchestrator | Friday 27 February 2026 01:08:26 +0000 (0:00:09.560) 0:00:20.956 ******* 2026-02-27 01:11:59.318586 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-27 01:11:59.318625 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.318653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.318682 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-27 01:11:59.318695 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318707 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.318718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.318730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.318748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.318765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.318785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318808 | orchestrator | skipping: [testbed-manager] 2026-02-27 
01:11:59.318820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.318843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.318872 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.318888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.318935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318948 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.318961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.318974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.318993 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.319037 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319050 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.319062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319080 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319112 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.319125 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319138 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319171 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.319183 | orchestrator | 2026-02-27 01:11:59.319195 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-27 01:11:59.319208 | orchestrator | Friday 27 February 2026 01:08:29 +0000 (0:00:03.055) 0:00:24.011 ******* 2026-02-27 01:11:59.319222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-27 01:11:59.319254 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 
'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319286 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319317 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-27 01:11:59.319329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319347 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319375 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:11:59.319387 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.319398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319467 | orchestrator | skipping: [testbed-node-1] 
2026-02-27 01:11:59.319478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-27 01:11:59.319578 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.319589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319623 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.319640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-27 01:11:59.319681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-27 01:11:59.319721 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.319732 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.319744 | orchestrator | 2026-02-27 01:11:59.319755 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-27 01:11:59.319766 | orchestrator | Friday 27 February 2026 01:08:32 +0000 (0:00:02.211) 0:00:26.223 ******* 2026-02-27 01:11:59.319777 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-27 01:11:59.319794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-27 01:11:59.319813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.319825 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.319843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.319854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.319865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.319877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.319888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.319905 | orchestrator | changed: [testbed-manager] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.319923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.319942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.319953 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.319965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.319976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.319987 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-27 01:11:59.320031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.320050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.320070 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.320082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.320093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.320105 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.320117 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.320133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.320145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.320170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.320194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.320206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.320227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.320239 | orchestrator | 2026-02-27 
01:11:59.320250 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-27 01:11:59.320261 | orchestrator | Friday 27 February 2026 01:08:39 +0000 (0:00:07.125) 0:00:33.348 ******* 2026-02-27 01:11:59.320273 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-27 01:11:59.320284 | orchestrator | 2026-02-27 01:11:59.320295 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-27 01:11:59.320306 | orchestrator | Friday 27 February 2026 01:08:40 +0000 (0:00:01.145) 0:00:34.493 ******* 2026-02-27 01:11:59.320317 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088682, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2764027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320335 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088682, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2764027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320359 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088682, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2764027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320371 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088703, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2815573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320382 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088703, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2815573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320394 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 
'inode': 1088703, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2815573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320405 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088682, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2764027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320416 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088682, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2764027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320439 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088682, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2764027, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.320484 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088673, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2758806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320509 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088703, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2815573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320526 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088682, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2764027, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320545 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088673, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2758806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320562 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088703, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2815573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320581 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088673, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2758806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320625 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088691, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.278511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320652 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088703, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2815573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320672 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088691, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.278511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320690 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088673, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2758806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320711 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088673, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2758806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.320731 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088703, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2815573, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.320751 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088691, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1772151530.278511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-27 01:11:59.320792 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/cadvisor.rules)
2026-02-27 01:11:59.321357 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2026-02-27 01:11:59.321382 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2026-02-27 01:11:59.321394 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2026-02-27 01:11:59.321405 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2026-02-27 01:11:59.321416 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2026-02-27 01:11:59.321428 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2026-02-27 01:11:59.321456 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2026-02-27 01:11:59.321475 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2026-02-27 01:11:59.321487 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2026-02-27 01:11:59.321499 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/cadvisor.rules)
2026-02-27 01:11:59.321510 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2026-02-27 01:11:59.321521 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2026-02-27 01:11:59.321533 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2026-02-27 01:11:59.321556 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2026-02-27 01:11:59.321568 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2026-02-27 01:11:59.321585 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2026-02-27 01:11:59.321597 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2026-02-27 01:11:59.321608 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2026-02-27 01:11:59.321619 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2026-02-27 01:11:59.321637 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2026-02-27 01:11:59.321652 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2026-02-27 01:11:59.321664 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2026-02-27 01:11:59.321681 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2026-02-27 01:11:59.321693 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2026-02-27 01:11:59.321704 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-27 01:11:59.321715 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2026-02-27 01:11:59.321734 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2026-02-27 01:11:59.321745 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-27 01:11:59.321761 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2026-02-27 01:11:59.321779 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2026-02-27 01:11:59.321791 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-27 01:11:59.321802 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-27 01:11:59.321813 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-27 01:11:59.321831 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-27 01:11:59.321843 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2026-02-27 01:11:59.321865 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-27 01:11:59.321898 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-27 01:11:59.321926 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2026-02-27 01:11:59.321945 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-27 01:11:59.321978 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-27 01:11:59.322089 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-27 01:11:59.322117 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2026-02-27 01:11:59.322148 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2026-02-27 01:11:59.322180 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-27 01:11:59.322195 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2026-02-27 01:11:59.322208 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2026-02-27 01:11:59.322230 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-27 01:11:59.322243 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-27 01:11:59.322256 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2026-02-27 01:11:59.322274 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2026-02-27 01:11:59.322312 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-27 01:11:59.322326 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2026-02-27 01:11:59.322339 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2026-02-27 01:11:59.322358 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2026-02-27 01:11:59.322369 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2026-02-27 01:11:59.322380 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2026-02-27 01:11:59.322397 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2026-02-27 01:11:59.322416 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2026-02-27 01:11:59.322428 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2026-02-27 01:11:59.322440 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2026-02-27 01:11:59.322459 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2026-02-27 01:11:59.322470 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2026-02-27 01:11:59.322481 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2026-02-27 01:11:59.322498 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088699, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1772151530.2811525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322516 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088659, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2722433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322528 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088688, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.278511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.322545 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088667, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.273875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322557 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088686, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2771938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322568 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088667, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.273875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322580 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088687, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2778769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322595 | orchestrator | skipping: [testbed-node-5] => 
(item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088699, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2811525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322615 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088667, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.273875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322627 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088659, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2722433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322644 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088659, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2722433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322656 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2828035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322667 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.322679 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088685, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.277045, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.322690 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 
1088659, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2722433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322707 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088667, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.273875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322724 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088686, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2771938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322736 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088687, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2778769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322754 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088687, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2778769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322765 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088687, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2778769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322777 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088659, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2722433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 
01:11:59.322789 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2828035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322800 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.322815 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088686, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2771938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322832 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088686, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2771938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322893 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088686, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2771938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322905 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088681, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2761517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.322917 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2828035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322928 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.322939 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088687, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2778769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322951 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2828035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322968 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2828035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.322979 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.322990 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.323032 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088686, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2771938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.323053 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088701, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2814736, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323065 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2828035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-27 01:11:59.323076 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.323087 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3, 'inode': 1088656, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.27198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323099 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088720, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.284511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323110 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088699, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2811525, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323126 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088667, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.273875, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323150 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088659, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2722433, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323162 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088687, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2778769, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323174 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088686, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2771938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-02-27 01:11:59.323185 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088714, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2828035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-27 01:11:59.323197 | orchestrator | 2026-02-27 01:11:59.323208 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-27 01:11:59.323220 | orchestrator | Friday 27 February 2026 01:09:13 +0000 (0:00:33.456) 0:01:07.950 ******* 2026-02-27 01:11:59.323231 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-27 01:11:59.323242 | orchestrator | 2026-02-27 01:11:59.323253 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-27 01:11:59.323265 | orchestrator | Friday 27 February 2026 01:09:14 +0000 (0:00:00.850) 0:01:08.801 ******* 2026-02-27 01:11:59.323276 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.323288 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323299 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-27 01:11:59.323311 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323321 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-27 01:11:59.323332 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:11:59.323343 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.323354 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323365 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-27 01:11:59.323376 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323394 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-27 01:11:59.323405 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.323416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323427 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-27 01:11:59.323443 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323454 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-27 01:11:59.323464 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.323475 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323486 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-27 01:11:59.323497 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323507 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-27 01:11:59.323518 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.323529 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323540 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-27 01:11:59.323556 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323568 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-27 01:11:59.323578 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.323589 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323600 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-27 01:11:59.323610 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323621 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-27 01:11:59.323632 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.323643 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323653 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-27 01:11:59.323664 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-27 01:11:59.323675 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-27 01:11:59.323686 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-27 01:11:59.323697 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-27 01:11:59.323708 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-27 01:11:59.323718 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-27 01:11:59.323729 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-27 01:11:59.323740 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-27 01:11:59.323751 | orchestrator | 2026-02-27 01:11:59.323761 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-02-27 01:11:59.323772 | orchestrator | Friday 27 February 2026 01:09:18 +0000 (0:00:03.358) 0:01:12.160 ******* 2026-02-27 01:11:59.323784 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-27 01:11:59.323795 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-27 01:11:59.323806 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.323817 | 
orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.323828 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-27 01:11:59.323839 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.323849 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-27 01:11:59.323860 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.323871 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-27 01:11:59.323882 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.323900 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-27 01:11:59.323911 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.323922 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-27 01:11:59.323933 | orchestrator | 2026-02-27 01:11:59.323944 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-27 01:11:59.323955 | orchestrator | Friday 27 February 2026 01:09:38 +0000 (0:00:20.060) 0:01:32.220 ******* 2026-02-27 01:11:59.323965 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-27 01:11:59.323977 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.323987 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-27 01:11:59.324019 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.324030 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-27 01:11:59.324042 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.324053 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-27 01:11:59.324064 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.324074 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-27 01:11:59.324085 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.324096 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-27 01:11:59.324106 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.324117 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-27 01:11:59.324128 | orchestrator | 2026-02-27 01:11:59.324139 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-27 01:11:59.324150 | orchestrator | Friday 27 February 2026 01:09:42 +0000 (0:00:03.934) 0:01:36.155 ******* 2026-02-27 01:11:59.324166 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-27 01:11:59.324178 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.324191 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-27 01:11:59.324202 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.324213 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-27 01:11:59.324224 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.324235 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-27 01:11:59.324253 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:11:59.324265 | orchestrator | 2026-02-27 01:11:59 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:11:59.324276 | orchestrator | 2026-02-27 01:11:59 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:11:59.324287 | orchestrator | 2026-02-27 01:11:59 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:11:59.324298 | orchestrator | 2026-02-27 01:11:59 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:11:59.324309 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-27 01:11:59.324320 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.324330 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-27 01:11:59.324349 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.324360 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-27 01:11:59.324371 | orchestrator | 2026-02-27 01:11:59.324382 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-27 01:11:59.324393 | orchestrator | Friday 27 February 2026 01:09:44 +0000 (0:00:02.293) 0:01:38.449 ******* 2026-02-27 01:11:59.324404 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-27 01:11:59.324415 | orchestrator | 2026-02-27 01:11:59.324425 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-27 01:11:59.324437 | orchestrator | Friday 27 February 2026 01:09:45 +0000 (0:00:00.811) 0:01:39.260 ******* 2026-02-27 01:11:59.324448 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:11:59.324459 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.324469 | 
orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.324480 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.324491 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.324502 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.324513 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.324524 | orchestrator | 2026-02-27 01:11:59.324534 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-27 01:11:59.324546 | orchestrator | Friday 27 February 2026 01:09:46 +0000 (0:00:01.062) 0:01:40.322 ******* 2026-02-27 01:11:59.324556 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:11:59.324567 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.324578 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:11:59.324589 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.324600 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.324611 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:11:59.324622 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:11:59.324633 | orchestrator | 2026-02-27 01:11:59.324643 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-27 01:11:59.324654 | orchestrator | Friday 27 February 2026 01:09:49 +0000 (0:00:03.266) 0:01:43.588 ******* 2026-02-27 01:11:59.324665 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-27 01:11:59.324677 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:11:59.324688 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-27 01:11:59.324698 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.324709 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-27 01:11:59.324720 | orchestrator | skipping: 
[testbed-node-0] 2026-02-27 01:11:59.324731 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-27 01:11:59.324741 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.324752 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-27 01:11:59.324763 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.324774 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-27 01:11:59.324785 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.324796 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-27 01:11:59.324806 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.324817 | orchestrator | 2026-02-27 01:11:59.324828 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-27 01:11:59.324839 | orchestrator | Friday 27 February 2026 01:09:52 +0000 (0:00:02.765) 0:01:46.354 ******* 2026-02-27 01:11:59.324855 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-27 01:11:59.324866 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-27 01:11:59.324885 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.324897 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.324908 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-27 01:11:59.324919 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.324930 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-27 01:11:59.324941 | orchestrator | changed: 
[testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-27 01:11:59.324952 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.324969 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-27 01:11:59.324980 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.324991 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-27 01:11:59.325059 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.325070 | orchestrator | 2026-02-27 01:11:59.325086 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-27 01:11:59.325104 | orchestrator | Friday 27 February 2026 01:09:54 +0000 (0:00:02.214) 0:01:48.568 ******* 2026-02-27 01:11:59.325123 | orchestrator | [WARNING]: Skipped 2026-02-27 01:11:59.325141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-27 01:11:59.325160 | orchestrator | due to this access issue: 2026-02-27 01:11:59.325177 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-27 01:11:59.325195 | orchestrator | not a directory 2026-02-27 01:11:59.325213 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-27 01:11:59.325232 | orchestrator | 2026-02-27 01:11:59.325250 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-27 01:11:59.325269 | orchestrator | Friday 27 February 2026 01:09:56 +0000 (0:00:02.036) 0:01:50.605 ******* 2026-02-27 01:11:59.325288 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:11:59.325307 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.325318 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.325329 | orchestrator | skipping: [testbed-node-2] 
2026-02-27 01:11:59.325340 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.325351 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.325362 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.325372 | orchestrator | 2026-02-27 01:11:59.325383 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-27 01:11:59.325394 | orchestrator | Friday 27 February 2026 01:09:57 +0000 (0:00:01.386) 0:01:51.992 ******* 2026-02-27 01:11:59.325406 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:11:59.325416 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:11:59.325427 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:11:59.325438 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:11:59.325448 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:11:59.325459 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:11:59.325470 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:11:59.325481 | orchestrator | 2026-02-27 01:11:59.325492 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-27 01:11:59.325503 | orchestrator | Friday 27 February 2026 01:09:59 +0000 (0:00:01.222) 0:01:53.214 ******* 2026-02-27 01:11:59.325516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.325539 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-27 01:11:59.325558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.325578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.325589 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.325599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.325609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325619 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 
2026-02-27 01:11:59.325635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-27 01:11:59.325682 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325713 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-02-27 01:11:59.325731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-27 01:11:59.325822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-27 01:11:59.325864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325916 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-27 01:11:59.325942 | orchestrator | 2026-02-27 01:11:59.325960 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-27 01:11:59.325977 | orchestrator | Friday 27 February 2026 01:10:04 +0000 (0:00:05.052) 0:01:58.266 ******* 2026-02-27 01:11:59.326052 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-27 01:11:59.326067 | orchestrator | skipping: [testbed-manager] 2026-02-27 01:11:59.326077 | orchestrator | 2026-02-27 01:11:59.326087 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-27 01:11:59.326096 | orchestrator | Friday 27 February 2026 01:10:05 +0000 (0:00:01.866) 0:02:00.133 ******* 2026-02-27 01:11:59.326106 | orchestrator | 2026-02-27 01:11:59.326115 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-27 01:11:59.326125 | orchestrator | Friday 27 February 2026 01:10:06 +0000 (0:00:00.089) 0:02:00.223 ******* 2026-02-27 01:11:59.326134 | orchestrator | 2026-02-27 01:11:59.326144 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-27 01:11:59.326153 | orchestrator | Friday 27 February 2026 01:10:06 +0000 (0:00:00.150) 0:02:00.373 ******* 2026-02-27 01:11:59.326163 | orchestrator | 2026-02-27 01:11:59.326172 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-27 01:11:59.326182 | orchestrator | Friday 
27 February 2026 01:10:06 +0000 (0:00:00.147) 0:02:00.521 *******
2026-02-27 01:11:59.326191 | orchestrator |
2026-02-27 01:11:59.326200 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-27 01:11:59.326210 | orchestrator | Friday 27 February 2026 01:10:07 +0000 (0:00:00.661) 0:02:01.183 *******
2026-02-27 01:11:59.326219 | orchestrator |
2026-02-27 01:11:59.326229 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-27 01:11:59.326238 | orchestrator | Friday 27 February 2026 01:10:07 +0000 (0:00:00.206) 0:02:01.389 *******
2026-02-27 01:11:59.326247 | orchestrator |
2026-02-27 01:11:59.326257 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-27 01:11:59.326266 | orchestrator | Friday 27 February 2026 01:10:07 +0000 (0:00:00.135) 0:02:01.525 *******
2026-02-27 01:11:59.326276 | orchestrator |
2026-02-27 01:11:59.326285 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-27 01:11:59.326295 | orchestrator | Friday 27 February 2026 01:10:07 +0000 (0:00:00.116) 0:02:01.641 *******
2026-02-27 01:11:59.326304 | orchestrator | changed: [testbed-manager]
2026-02-27 01:11:59.326314 | orchestrator |
2026-02-27 01:11:59.326323 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-27 01:11:59.326333 | orchestrator | Friday 27 February 2026 01:10:30 +0000 (0:00:22.894) 0:02:24.536 *******
2026-02-27 01:11:59.326342 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:11:59.326352 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:11:59.326368 | orchestrator | changed: [testbed-manager]
2026-02-27 01:11:59.326377 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:11:59.326387 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:11:59.326396 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:11:59.326406 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:11:59.326415 | orchestrator |
2026-02-27 01:11:59.326425 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-27 01:11:59.326434 | orchestrator | Friday 27 February 2026 01:10:43 +0000 (0:00:12.763) 0:02:37.299 *******
2026-02-27 01:11:59.326444 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:11:59.326454 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:11:59.326463 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:11:59.326472 | orchestrator |
2026-02-27 01:11:59.326482 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-27 01:11:59.326492 | orchestrator | Friday 27 February 2026 01:10:50 +0000 (0:00:07.799) 0:02:45.098 *******
2026-02-27 01:11:59.326502 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:11:59.326529 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:11:59.326539 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:11:59.326549 | orchestrator |
2026-02-27 01:11:59.326558 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-27 01:11:59.326568 | orchestrator | Friday 27 February 2026 01:11:04 +0000 (0:00:13.169) 0:02:58.268 *******
2026-02-27 01:11:59.326578 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:11:59.326587 | orchestrator | changed: [testbed-manager]
2026-02-27 01:11:59.326596 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:11:59.326606 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:11:59.326615 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:11:59.326625 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:11:59.326634 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:11:59.326644 | orchestrator |
2026-02-27 01:11:59.326653 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-27 01:11:59.326663 | orchestrator | Friday 27 February 2026 01:11:20 +0000 (0:00:16.179) 0:03:14.447 *******
2026-02-27 01:11:59.326673 | orchestrator | changed: [testbed-manager]
2026-02-27 01:11:59.326682 | orchestrator |
2026-02-27 01:11:59.326692 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-27 01:11:59.326701 | orchestrator | Friday 27 February 2026 01:11:28 +0000 (0:00:08.503) 0:03:22.951 *******
2026-02-27 01:11:59.326711 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:11:59.326721 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:11:59.326730 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:11:59.326739 | orchestrator |
2026-02-27 01:11:59.326749 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-27 01:11:59.326759 | orchestrator | Friday 27 February 2026 01:11:38 +0000 (0:00:09.981) 0:03:32.932 *******
2026-02-27 01:11:59.326768 | orchestrator | changed: [testbed-manager]
2026-02-27 01:11:59.326778 | orchestrator |
2026-02-27 01:11:59.326787 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-27 01:11:59.326797 | orchestrator | Friday 27 February 2026 01:11:44 +0000 (0:00:05.654) 0:03:38.586 *******
2026-02-27 01:11:59.326806 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:11:59.326816 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:11:59.326825 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:11:59.326834 | orchestrator |
2026-02-27 01:11:59.326844 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 01:11:59.326854 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-27 01:11:59.326864 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-27 01:11:59.326874 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-27 01:11:59.326884 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-27 01:11:59.326894 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-27 01:11:59.326904 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-27 01:11:59.326914 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-27 01:11:59.326923 | orchestrator |
2026-02-27 01:11:59.326932 | orchestrator |
2026-02-27 01:11:59.326942 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 01:11:59.326952 | orchestrator | Friday 27 February 2026 01:11:56 +0000 (0:00:11.756) 0:03:50.343 *******
2026-02-27 01:11:59.326969 | orchestrator | ===============================================================================
2026-02-27 01:11:59.326979 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 33.46s
2026-02-27 01:11:59.326989 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.89s
2026-02-27 01:11:59.327019 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 20.06s
2026-02-27 01:11:59.327029 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.18s
2026-02-27 01:11:59.327039 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 13.17s
2026-02-27 01:11:59.327048 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.76s
2026-02-27 01:11:59.327062 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 11.76s
2026-02-27 01:11:59.327072 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.98s
2026-02-27 01:11:59.327082 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 9.56s
2026-02-27 01:11:59.327091 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.50s
2026-02-27 01:11:59.327101 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 7.80s
2026-02-27 01:11:59.327111 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.13s
2026-02-27 01:11:59.327120 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.65s
2026-02-27 01:11:59.327129 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.05s
2026-02-27 01:11:59.327139 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.20s
2026-02-27 01:11:59.327154 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.93s
2026-02-27 01:11:59.327164 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.36s
2026-02-27 01:11:59.327174 | orchestrator | prometheus : include_tasks ---------------------------------------------- 3.33s
2026-02-27 01:11:59.327184 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.27s
2026-02-27 01:11:59.327194 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 3.05s
2026-02-27 01:12:02.356568 | orchestrator | 2026-02-27 01:12:02 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:02.356974 | orchestrator | 2026-02-27 01:12:02 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:02.357574 | orchestrator | 2026-02-27 01:12:02 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:02.358573 | orchestrator | 2026-02-27 01:12:02 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:02.358599 | orchestrator | 2026-02-27 01:12:02 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:05.391468 | orchestrator | 2026-02-27 01:12:05 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:05.391564 | orchestrator | 2026-02-27 01:12:05 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:05.393549 | orchestrator | 2026-02-27 01:12:05 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:05.394745 | orchestrator | 2026-02-27 01:12:05 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:05.396528 | orchestrator | 2026-02-27 01:12:05 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:08.436271 | orchestrator | 2026-02-27 01:12:08 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:08.436440 | orchestrator | 2026-02-27 01:12:08 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:08.437239 | orchestrator | 2026-02-27 01:12:08 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:08.437675 | orchestrator | 2026-02-27 01:12:08 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:08.437693 | orchestrator | 2026-02-27 01:12:08 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:11.467809 | orchestrator | 2026-02-27 01:12:11 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:11.468465 | orchestrator | 2026-02-27 01:12:11 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:11.469352 | orchestrator | 2026-02-27 01:12:11 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:11.470548 | orchestrator | 2026-02-27 01:12:11 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:11.470593 | orchestrator | 2026-02-27 01:12:11 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:14.516610 | orchestrator | 2026-02-27 01:12:14 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:14.517413 | orchestrator | 2026-02-27 01:12:14 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:14.518667 | orchestrator | 2026-02-27 01:12:14 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:14.519925 | orchestrator | 2026-02-27 01:12:14 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:14.520033 | orchestrator | 2026-02-27 01:12:14 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:17.560394 | orchestrator | 2026-02-27 01:12:17 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:17.561327 | orchestrator | 2026-02-27 01:12:17 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:17.572903 | orchestrator | 2026-02-27 01:12:17 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:17.578307 | orchestrator | 2026-02-27 01:12:17 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:17.578388 | orchestrator | 2026-02-27 01:12:17 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:20.632392 | orchestrator | 2026-02-27 01:12:20 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:20.636326 | orchestrator | 2026-02-27 01:12:20 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:20.637930 | orchestrator | 2026-02-27 01:12:20 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:20.640428 | orchestrator | 2026-02-27 01:12:20 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:20.640474 | orchestrator | 2026-02-27 01:12:20 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:23.691986 | orchestrator | 2026-02-27 01:12:23 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:23.695954 | orchestrator | 2026-02-27 01:12:23 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:23.696565 | orchestrator | 2026-02-27 01:12:23 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:23.698572 | orchestrator | 2026-02-27 01:12:23 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:23.698967 | orchestrator | 2026-02-27 01:12:23 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:26.748366 | orchestrator | 2026-02-27 01:12:26 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:26.749941 | orchestrator | 2026-02-27 01:12:26 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:26.752394 | orchestrator | 2026-02-27 01:12:26 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:26.753552 | orchestrator | 2026-02-27 01:12:26 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:26.753822 | orchestrator | 2026-02-27 01:12:26 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:29.801741 | orchestrator | 2026-02-27 01:12:29 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:29.803523 | orchestrator | 2026-02-27 01:12:29 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:29.805315 | orchestrator | 2026-02-27 01:12:29 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:29.808177 | orchestrator | 2026-02-27 01:12:29 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:29.808225 | orchestrator | 2026-02-27 01:12:29 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:32.856666 | orchestrator | 2026-02-27 01:12:32 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state STARTED
2026-02-27 01:12:32.857998 | orchestrator | 2026-02-27 01:12:32 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED
2026-02-27 01:12:32.859711 | orchestrator | 2026-02-27 01:12:32 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:12:32.863378 | orchestrator | 2026-02-27 01:12:32 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED
2026-02-27 01:12:32.863437 | orchestrator | 2026-02-27 01:12:32 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:12:35.913289 | orchestrator |
2026-02-27 01:12:35.913480 | orchestrator |
2026-02-27 01:12:35.913503 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 01:12:35.913518 | orchestrator |
2026-02-27 01:12:35.913533 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 01:12:35.913548 | orchestrator | Friday 27 February 2026 01:09:03 +0000 (0:00:00.381) 0:00:00.381 *******
2026-02-27 01:12:35.913562 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:12:35.913577 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:12:35.913591 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:12:35.913603 | orchestrator |
2026-02-27 01:12:35.913615 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 01:12:35.913627 | orchestrator | Friday 27 February 2026 01:09:03 +0000 (0:00:00.409) 0:00:00.790 *******
2026-02-27 01:12:35.913639 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-27 01:12:35.913652 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-27 01:12:35.913664 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-27 01:12:35.913675 | orchestrator |
2026-02-27 01:12:35.913686 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-27 01:12:35.913697 | orchestrator |
2026-02-27 01:12:35.913725 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-27 01:12:35.913738 | orchestrator | Friday 27 February 2026 01:09:04 +0000 (0:00:00.652) 0:00:01.442 *******
2026-02-27 01:12:35.913749 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:12:35.913760 | orchestrator |
2026-02-27 01:12:35.913771 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-27 01:12:35.913781 | orchestrator | Friday 27 February 2026 01:09:05 +0000 (0:00:00.722) 0:00:02.165 *******
2026-02-27 01:12:35.913791 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-27 01:12:35.913828 | orchestrator |
2026-02-27 01:12:35.913863 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-27 01:12:35.913875 | orchestrator | Friday 27 February 2026 01:09:08 +0000 (0:00:03.509) 0:00:05.675 *******
2026-02-27 01:12:35.913886 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-27 01:12:35.913899 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-27 01:12:35.913910 | orchestrator |
2026-02-27 01:12:35.913922 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-27 01:12:35.913934 | orchestrator | Friday 27 February 2026 01:09:15 +0000 (0:00:06.897) 0:00:12.572 *******
2026-02-27 01:12:35.913969 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-27 01:12:35.913982 | orchestrator |
2026-02-27 01:12:35.913995 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-27 01:12:35.914007 | orchestrator | Friday 27 February 2026 01:09:19 +0000 (0:00:03.946) 0:00:16.518 *******
2026-02-27 01:12:35.914086 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-27 01:12:35.914099 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-27 01:12:35.914111 | orchestrator |
2026-02-27 01:12:35.914123 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-27 01:12:35.914135 | orchestrator | Friday 27 February 2026 01:09:23 +0000 (0:00:03.957) 0:00:20.476 *******
2026-02-27 01:12:35.914163 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-27 01:12:35.914191 | orchestrator |
2026-02-27 01:12:35.914203 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-02-27 01:12:35.914215 | orchestrator | Friday 27 February 2026 01:09:26 +0000 (0:00:03.040) 0:00:23.516 *******
2026-02-27 01:12:35.914240 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-02-27 01:12:35.914253 | orchestrator |
2026-02-27 01:12:35.914264 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-02-27 01:12:35.914276 | orchestrator | Friday 27 February 2026 01:09:29 +0000 (0:00:03.121) 0:00:26.638 *******
2026-02-27 01:12:35.914328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-27 01:12:35.914352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-27 01:12:35.914388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-27 01:12:35.914401 | orchestrator |
2026-02-27 01:12:35.914412 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-27 01:12:35.914424 | orchestrator | Friday 27 February 2026 01:09:34 +0000 (0:00:04.959) 0:00:31.597 *******
2026-02-27 01:12:35.914436 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:12:35.914461 | orchestrator |
2026-02-27 01:12:35.914479 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2026-02-27 01:12:35.914491 | orchestrator | Friday 27 February 2026 01:09:35 +0000 (0:00:00.895) 0:00:32.493 *******
2026-02-27 01:12:35.914517 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:12:35.914529 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:12:35.914548 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:12:35.914558 | orchestrator |
2026-02-27 01:12:35.914568 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2026-02-27 01:12:35.914578 | orchestrator | Friday 27 February 2026 01:09:40 +0000 (0:00:05.037) 0:00:37.530 *******
2026-02-27 01:12:35.914588 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:35.914598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:35.914609 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:35.914619 | orchestrator |
2026-02-27 01:12:35.914634 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2026-02-27 01:12:35.914645 | orchestrator | Friday 27 February 2026 01:09:42 +0000 (0:00:02.000) 0:00:39.531 *******
2026-02-27 01:12:35.914655 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:35.914667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:35.914677 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:35.914688 | orchestrator |
2026-02-27 01:12:35.914699 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2026-02-27 01:12:35.914710 | orchestrator | Friday 27 February 2026 01:09:44 +0000 (0:00:01.690) 0:00:41.221 *******
2026-02-27 01:12:35.914721 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:12:35.914733 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:12:35.914743 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:12:35.914769 | orchestrator |
2026-02-27 01:12:35.914781 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2026-02-27 01:12:35.914792 | orchestrator | Friday 27 February 2026 01:09:45 +0000 (0:00:00.150) 0:00:42.139 *******
2026-02-27 01:12:35.914803 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:12:35.914814 | orchestrator |
2026-02-27 01:12:35.914825 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2026-02-27 01:12:35.914836 | orchestrator | Friday 27 February 2026 01:09:45 +0000 (0:00:00.150) 0:00:42.289 *******
2026-02-27 01:12:35.914848 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:12:35.914860 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:12:35.914870 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:12:35.914881 | orchestrator |
2026-02-27 01:12:35.914892 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-27 01:12:35.914904 | orchestrator | Friday 27 February 2026 01:09:45 +0000 (0:00:00.345) 0:00:42.634 *******
2026-02-27 01:12:35.914915 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:12:35.914927 | orchestrator |
2026-02-27 01:12:35.914938 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2026-02-27 01:12:35.914948 | orchestrator | Friday 27 February 2026 01:09:46 +0000 (0:00:00.620) 0:00:43.255 *******
2026-02-27 01:12:35.914975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-27 01:12:35.915056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-27 01:12:35.915074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-27 01:12:35.915095 | orchestrator |
2026-02-27 01:12:35.915107 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2026-02-27 01:12:35.915119 | orchestrator | Friday 27 February 2026 01:09:53 +0000 (0:00:07.307) 0:00:50.563 *******
2026-02-27 01:12:35.915147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-27 01:12:35.915161 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:12:35.915174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'],
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-27 01:12:35.915186 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-27 01:12:35.915227 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.915238 | orchestrator | 2026-02-27 01:12:35.915250 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-27 01:12:35.915261 | orchestrator | Friday 27 February 2026 01:09:57 +0000 (0:00:04.198) 0:00:54.761 ******* 2026-02-27 01:12:35.915280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-27 01:12:35.915293 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.915309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}})  2026-02-27 01:12:35.915329 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.915364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-27 01:12:35.915379 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915405 | orchestrator | 2026-02-27 01:12:35.915417 | orchestrator | TASK [glance : 
Creating TLS backend PEM File] ********************************** 2026-02-27 01:12:35.915428 | orchestrator | Friday 27 February 2026 01:10:02 +0000 (0:00:05.039) 0:00:59.800 ******* 2026-02-27 01:12:35.915438 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.915450 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915461 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.915473 | orchestrator | 2026-02-27 01:12:35.915484 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-27 01:12:35.915496 | orchestrator | Friday 27 February 2026 01:10:08 +0000 (0:00:05.512) 0:01:05.313 ******* 2026-02-27 01:12:35.915509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 01:12:35.915552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 01:12:35.915568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 01:12:35.915587 | orchestrator | 2026-02-27 01:12:35.915598 | orchestrator | TASK [glance : 
Copying over glance-api.conf] *********************************** 2026-02-27 01:12:35.915609 | orchestrator | Friday 27 February 2026 01:10:15 +0000 (0:00:07.255) 0:01:12.568 ******* 2026-02-27 01:12:35.915620 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.915631 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:35.915642 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:35.915653 | orchestrator | 2026-02-27 01:12:35.915663 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-27 01:12:35.915673 | orchestrator | Friday 27 February 2026 01:10:24 +0000 (0:00:08.611) 0:01:21.180 ******* 2026-02-27 01:12:35.915683 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.915693 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.915702 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915713 | orchestrator | 2026-02-27 01:12:35.915723 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-27 01:12:35.915734 | orchestrator | Friday 27 February 2026 01:10:29 +0000 (0:00:04.962) 0:01:26.142 ******* 2026-02-27 01:12:35.915744 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.915763 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35 | INFO  | Task fe6163a7-80b9-4ead-b329-b84ddcc96205 is in state SUCCESS 2026-02-27 01:12:35.915787 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915797 | orchestrator | 2026-02-27 01:12:35.915808 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-27 01:12:35.915818 | orchestrator | Friday 27 February 2026 01:10:37 +0000 (0:00:08.623) 0:01:34.766 ******* 2026-02-27 01:12:35.915827 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915837 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.915847 | orchestrator | 
skipping: [testbed-node-2] 2026-02-27 01:12:35.915857 | orchestrator | 2026-02-27 01:12:35.915868 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-27 01:12:35.915879 | orchestrator | Friday 27 February 2026 01:10:44 +0000 (0:00:06.599) 0:01:41.365 ******* 2026-02-27 01:12:35.915890 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915900 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.915910 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.915921 | orchestrator | 2026-02-27 01:12:35.915939 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-27 01:12:35.915950 | orchestrator | Friday 27 February 2026 01:10:50 +0000 (0:00:06.578) 0:01:47.944 ******* 2026-02-27 01:12:35.915961 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.915972 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.915982 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.915992 | orchestrator | 2026-02-27 01:12:35.916003 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-27 01:12:35.916115 | orchestrator | Friday 27 February 2026 01:10:51 +0000 (0:00:00.483) 0:01:48.428 ******* 2026-02-27 01:12:35.916150 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-27 01:12:35.916162 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.916173 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-27 01:12:35.916183 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.916193 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-27 01:12:35.916204 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.916215 | orchestrator | 2026-02-27 
01:12:35.916224 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-27 01:12:35.916235 | orchestrator | Friday 27 February 2026 01:10:58 +0000 (0:00:06.645) 0:01:55.074 ******* 2026-02-27 01:12:35.916246 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.916256 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:35.916266 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:35.916276 | orchestrator | 2026-02-27 01:12:35.916286 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-27 01:12:35.916296 | orchestrator | Friday 27 February 2026 01:11:03 +0000 (0:00:05.283) 0:02:00.357 ******* 2026-02-27 01:12:35.916309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 01:12:35.916345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 01:12:35.916367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-27 01:12:35.916379 | orchestrator | 2026-02-27 01:12:35.916390 | 
orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-27 01:12:35.916400 | orchestrator | Friday 27 February 2026 01:11:13 +0000 (0:00:10.119) 0:02:10.477 ******* 2026-02-27 01:12:35.916411 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:35.916422 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:35.916432 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:35.916443 | orchestrator | 2026-02-27 01:12:35.916454 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-27 01:12:35.916464 | orchestrator | Friday 27 February 2026 01:11:13 +0000 (0:00:00.435) 0:02:10.912 ******* 2026-02-27 01:12:35.916475 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.916484 | orchestrator | 2026-02-27 01:12:35.916495 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-02-27 01:12:35.916506 | orchestrator | Friday 27 February 2026 01:11:16 +0000 (0:00:02.271) 0:02:13.183 ******* 2026-02-27 01:12:35.916517 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.916527 | orchestrator | 2026-02-27 01:12:35.916538 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-27 01:12:35.916549 | orchestrator | Friday 27 February 2026 01:11:18 +0000 (0:00:02.132) 0:02:15.316 ******* 2026-02-27 01:12:35.916560 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.916570 | orchestrator | 2026-02-27 01:12:35.916581 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-27 01:12:35.916592 | orchestrator | Friday 27 February 2026 01:11:20 +0000 (0:00:02.041) 0:02:17.357 ******* 2026-02-27 01:12:35.916611 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.916632 | orchestrator | 2026-02-27 01:12:35.916643 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators 
function] *************** 2026-02-27 01:12:35.916654 | orchestrator | Friday 27 February 2026 01:11:51 +0000 (0:00:31.442) 0:02:48.799 ******* 2026-02-27 01:12:35.916663 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.916673 | orchestrator | 2026-02-27 01:12:35.916682 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-27 01:12:35.916692 | orchestrator | Friday 27 February 2026 01:11:53 +0000 (0:00:01.814) 0:02:50.614 ******* 2026-02-27 01:12:35.916701 | orchestrator | 2026-02-27 01:12:35.916711 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-27 01:12:35.916721 | orchestrator | Friday 27 February 2026 01:11:53 +0000 (0:00:00.287) 0:02:50.901 ******* 2026-02-27 01:12:35.916730 | orchestrator | 2026-02-27 01:12:35.916740 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-27 01:12:35.916751 | orchestrator | Friday 27 February 2026 01:11:53 +0000 (0:00:00.067) 0:02:50.969 ******* 2026-02-27 01:12:35.916760 | orchestrator | 2026-02-27 01:12:35.916769 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-27 01:12:35.916785 | orchestrator | Friday 27 February 2026 01:11:53 +0000 (0:00:00.077) 0:02:51.047 ******* 2026-02-27 01:12:35.916795 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:35.916805 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:35.916833 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:35.916845 | orchestrator | 2026-02-27 01:12:35.916855 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:12:35.916867 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-27 01:12:35.916878 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 
ignored=0 2026-02-27 01:12:35.916889 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-27 01:12:35.916900 | orchestrator | 2026-02-27 01:12:35.916910 | orchestrator | 2026-02-27 01:12:35.916921 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:12:35.916931 | orchestrator | Friday 27 February 2026 01:12:33 +0000 (0:00:39.812) 0:03:30.859 ******* 2026-02-27 01:12:35.916942 | orchestrator | =============================================================================== 2026-02-27 01:12:35.916952 | orchestrator | glance : Restart glance-api container ---------------------------------- 39.81s 2026-02-27 01:12:35.916962 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 31.44s 2026-02-27 01:12:35.916973 | orchestrator | glance : Check glance containers --------------------------------------- 10.12s 2026-02-27 01:12:35.916984 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 8.62s 2026-02-27 01:12:35.917087 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.61s 2026-02-27 01:12:35.917098 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 7.31s 2026-02-27 01:12:35.917109 | orchestrator | glance : Copying over config.json files for services -------------------- 7.26s 2026-02-27 01:12:35.917120 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.90s 2026-02-27 01:12:35.917130 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.65s 2026-02-27 01:12:35.917140 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.60s 2026-02-27 01:12:35.917152 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.58s 2026-02-27 01:12:35.917163 | 
orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.51s 2026-02-27 01:12:35.917173 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.28s 2026-02-27 01:12:35.917195 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 5.04s 2026-02-27 01:12:35.917206 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.04s 2026-02-27 01:12:35.917233 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.96s 2026-02-27 01:12:35.917243 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.96s 2026-02-27 01:12:35.917253 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.20s 2026-02-27 01:12:35.917263 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.96s 2026-02-27 01:12:35.917272 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.95s 2026-02-27 01:12:35.917283 | orchestrator | 2026-02-27 01:12:35 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:12:35.917404 | orchestrator | 2026-02-27 01:12:35 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:12:35.919329 | orchestrator | 2026-02-27 01:12:35 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:35.920551 | orchestrator | 2026-02-27 01:12:35 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:35.920581 | orchestrator | 2026-02-27 01:12:35 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:12:38.973754 | orchestrator | 2026-02-27 01:12:38 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:12:38.974980 | orchestrator | 2026-02-27 01:12:38 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 
is in state STARTED 2026-02-27 01:12:38.977876 | orchestrator | 2026-02-27 01:12:38 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:38.979122 | orchestrator | 2026-02-27 01:12:38 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:38.979667 | orchestrator | 2026-02-27 01:12:38 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:12:42.033505 | orchestrator | 2026-02-27 01:12:42 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:12:42.035699 | orchestrator | 2026-02-27 01:12:42 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:12:42.043298 | orchestrator | 2026-02-27 01:12:42 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:42.044710 | orchestrator | 2026-02-27 01:12:42 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:42.044722 | orchestrator | 2026-02-27 01:12:42 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:12:45.085522 | orchestrator | 2026-02-27 01:12:45 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:12:45.087721 | orchestrator | 2026-02-27 01:12:45 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:12:45.089784 | orchestrator | 2026-02-27 01:12:45 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:45.091836 | orchestrator | 2026-02-27 01:12:45 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:45.091907 | orchestrator | 2026-02-27 01:12:45 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:12:48.126245 | orchestrator | 2026-02-27 01:12:48 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state STARTED 2026-02-27 01:12:48.126439 | orchestrator | 2026-02-27 01:12:48 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 
01:12:48.127972 | orchestrator | 2026-02-27 01:12:48 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:48.128932 | orchestrator | 2026-02-27 01:12:48 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:48.128969 | orchestrator | 2026-02-27 01:12:48 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:12:51.165552 | orchestrator | 2026-02-27 01:12:51 | INFO  | Task daeaef11-0c9d-4526-a5c8-cf96b9738003 is in state SUCCESS 2026-02-27 01:12:51.167983 | orchestrator | 2026-02-27 01:12:51.168112 | orchestrator | 2026-02-27 01:12:51.168136 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:12:51.168156 | orchestrator | 2026-02-27 01:12:51.168175 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:12:51.168195 | orchestrator | Friday 27 February 2026 01:09:32 +0000 (0:00:00.446) 0:00:00.446 ******* 2026-02-27 01:12:51.168213 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:12:51.168233 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:12:51.168252 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:12:51.168269 | orchestrator | 2026-02-27 01:12:51.168288 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:12:51.168308 | orchestrator | Friday 27 February 2026 01:09:32 +0000 (0:00:00.379) 0:00:00.826 ******* 2026-02-27 01:12:51.168442 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-27 01:12:51.168464 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-27 01:12:51.168483 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-27 01:12:51.168504 | orchestrator | 2026-02-27 01:12:51.168523 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-27 01:12:51.168561 | orchestrator | 2026-02-27 
01:12:51.168581 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-27 01:12:51.168600 | orchestrator | Friday 27 February 2026 01:09:33 +0000 (0:00:00.571) 0:00:01.397 ******* 2026-02-27 01:12:51.168619 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:12:51.168637 | orchestrator | 2026-02-27 01:12:51.168657 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-27 01:12:51.168676 | orchestrator | Friday 27 February 2026 01:09:33 +0000 (0:00:00.783) 0:00:02.181 ******* 2026-02-27 01:12:51.168696 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-27 01:12:51.168714 | orchestrator | 2026-02-27 01:12:51.168732 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-27 01:12:51.168743 | orchestrator | Friday 27 February 2026 01:09:37 +0000 (0:00:03.945) 0:00:06.127 ******* 2026-02-27 01:12:51.168754 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-27 01:12:51.168765 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-27 01:12:51.168776 | orchestrator | 2026-02-27 01:12:51.168787 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-27 01:12:51.168797 | orchestrator | Friday 27 February 2026 01:09:44 +0000 (0:00:06.745) 0:00:12.872 ******* 2026-02-27 01:12:51.168808 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-27 01:12:51.168819 | orchestrator | 2026-02-27 01:12:51.168830 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-27 01:12:51.168840 | orchestrator | Friday 27 February 2026 01:09:47 +0000 (0:00:03.182) 
0:00:16.055 ******* 2026-02-27 01:12:51.168851 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-27 01:12:51.168861 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-27 01:12:51.168872 | orchestrator | 2026-02-27 01:12:51.168883 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-27 01:12:51.168898 | orchestrator | Friday 27 February 2026 01:09:51 +0000 (0:00:03.833) 0:00:19.888 ******* 2026-02-27 01:12:51.168916 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-27 01:12:51.168934 | orchestrator | 2026-02-27 01:12:51.168984 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-27 01:12:51.169004 | orchestrator | Friday 27 February 2026 01:09:55 +0000 (0:00:03.617) 0:00:23.506 ******* 2026-02-27 01:12:51.169070 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-27 01:12:51.169091 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-27 01:12:51.169109 | orchestrator | 2026-02-27 01:12:51.169128 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-27 01:12:51.169146 | orchestrator | Friday 27 February 2026 01:10:02 +0000 (0:00:07.100) 0:00:30.607 ******* 2026-02-27 01:12:51.169168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.169219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.169261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.169302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169406 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.169417 | orchestrator | 2026-02-27 01:12:51.169428 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-27 01:12:51.169439 | orchestrator | Friday 27 February 2026 01:10:04 +0000 (0:00:02.559) 0:00:33.166 ******* 2026-02-27 01:12:51.169450 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:51.169461 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:51.169472 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:51.169482 | orchestrator | 2026-02-27 01:12:51.169493 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-27 01:12:51.169504 | orchestrator | Friday 27 February 2026 01:10:05 +0000 (0:00:00.706) 0:00:33.872 ******* 2026-02-27 01:12:51.169514 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:12:51.169525 | orchestrator | 2026-02-27 01:12:51.169541 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-27 01:12:51.169553 | orchestrator | Friday 27 February 2026 01:10:06 +0000 (0:00:00.919) 0:00:34.792 ******* 2026-02-27 01:12:51.169563 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-27 01:12:51.169575 | 
orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-27 01:12:51.169586 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-27 01:12:51.169596 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-27 01:12:51.169607 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-27 01:12:51.169617 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-27 01:12:51.169628 | orchestrator | 2026-02-27 01:12:51.169638 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-27 01:12:51.169649 | orchestrator | Friday 27 February 2026 01:10:08 +0000 (0:00:02.211) 0:00:37.003 ******* 2026-02-27 01:12:51.169662 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-27 01:12:51.169681 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-27 01:12:51.169698 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-27 01:12:51.169710 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-27 01:12:51.169728 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-27 01:12:51.169740 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-27 01:12:51.169757 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-27 01:12:51.169774 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-27 01:12:51.169786 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-27 01:12:51.169808 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-27 01:12:51.169829 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-27 
01:12:51.169859 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-27 01:12:51.169877 | orchestrator |
2026-02-27 01:12:51.169896 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2026-02-27 01:12:51.169907 | orchestrator | Friday 27 February 2026 01:10:14 +0000 (0:00:05.408) 0:00:42.412 *******
2026-02-27 01:12:51.169918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:51.169929 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:51.169940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2026-02-27 01:12:51.169950 | orchestrator |
2026-02-27 01:12:51.169961 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2026-02-27 01:12:51.169976 | orchestrator | Friday 27 February 2026 01:10:16 +0000 (0:00:02.732) 0:00:45.144 *******
2026-02-27 01:12:51.169987 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring)
2026-02-27 01:12:51.169998 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring)
2026-02-27 01:12:51.170008 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring)
2026-02-27 01:12:51.170138 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring)
2026-02-27 01:12:51.170151 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring)
2026-02-27 01:12:51.170162 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring)
2026-02-27 01:12:51.170172 | orchestrator |
2026-02-27 01:12:51.170183 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2026-02-27 01:12:51.170194 | orchestrator | Friday 27 February 2026 01:10:21 +0000 (0:00:04.320) 0:00:49.465 *******
2026-02-27 01:12:51.170204 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume)
2026-02-27 01:12:51.170215 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume)
2026-02-27 01:12:51.170226 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume)
2026-02-27 01:12:51.170237 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup)
2026-02-27 01:12:51.170247 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup)
2026-02-27 01:12:51.170258 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup)
2026-02-27 01:12:51.170268 | orchestrator |
2026-02-27 01:12:51.170279 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2026-02-27 01:12:51.170290 | orchestrator | Friday 27 February 2026 01:10:22 +0000 (0:00:01.455) 0:00:50.921 *******
2026-02-27 01:12:51.170301 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:12:51.170311 | orchestrator |
2026-02-27 01:12:51.170322 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2026-02-27 01:12:51.170333 | orchestrator | Friday 27 February 2026 01:10:22 +0000 (0:00:00.195) 0:00:51.117 *******
2026-02-27 01:12:51.170344 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:12:51.170364 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:12:51.170383 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:12:51.170396 | orchestrator |
2026-02-27 01:12:51.170415 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-27 01:12:51.170434 | orchestrator | Friday 27 February 2026 01:10:23 +0000 (0:00:00.410) 0:00:51.528 *******
2026-02-27 01:12:51.170453 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:12:51.170471 | orchestrator |
2026-02-27 01:12:51.170484 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2026-02-27 01:12:51.170494 | orchestrator | Friday 27 February 2026 01:10:24 +0000 (0:00:00.863) 0:00:52.391 *******
2026-02-27 01:12:51.170506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.170518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.170535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.170547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170753 | orchestrator |
2026-02-27 01:12:51.170764 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2026-02-27 01:12:51.170775 | orchestrator | Friday 27 February 2026 01:10:29 +0000 (0:00:05.191) 0:00:57.582 *******
2026-02-27 01:12:51.170786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.170802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170850 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:12:51.170862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.170873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.170978 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:12:51.171080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.171122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171186 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:12:51.171204 | orchestrator |
2026-02-27 01:12:51.171222 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-27 01:12:51.171240 | orchestrator | Friday 27 February 2026 01:10:30 +0000 (0:00:01.193) 0:00:58.775 *******
2026-02-27 01:12:51.171311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.171351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171425 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:12:51.171444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.171470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171551 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:12:51.171571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.171588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171630 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:12:51.171640 | orchestrator |
2026-02-27 01:12:51.171650 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-02-27 01:12:51.171660 | orchestrator | Friday 27 February 2026 01:10:34 +0000 (0:00:04.354) 0:01:03.130 *******
2026-02-27 01:12:51.171670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.171687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.171698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-27 01:12:51.171708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-27 01:12:51.171814 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.171824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.171834 | orchestrator | 2026-02-27 01:12:51.171844 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-27 01:12:51.171854 | orchestrator | Friday 27 February 2026 01:10:41 +0000 (0:00:07.029) 0:01:10.160 ******* 2026-02-27 01:12:51.171864 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-27 01:12:51.171879 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-27 01:12:51.171889 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-27 01:12:51.171899 | orchestrator | 2026-02-27 01:12:51.171908 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-27 01:12:51.171918 | orchestrator | Friday 27 February 2026 01:10:44 +0000 (0:00:02.913) 0:01:13.073 ******* 2026-02-27 01:12:51.171927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.171938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.171958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.171968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.171986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.171996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172117 | orchestrator | 2026-02-27 01:12:51.172127 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-27 01:12:51.172137 | orchestrator | Friday 27 February 2026 01:11:02 +0000 (0:00:18.075) 0:01:31.149 ******* 2026-02-27 01:12:51.172148 | orchestrator | changed: [testbed-node-0] 
2026-02-27 01:12:51.172164 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:51.172200 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:51.172216 | orchestrator | 2026-02-27 01:12:51.172231 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-27 01:12:51.172245 | orchestrator | Friday 27 February 2026 01:11:05 +0000 (0:00:02.224) 0:01:33.374 ******* 2026-02-27 01:12:51.172260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-27 01:12:51.172283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-27 01:12:51.172298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172340 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:51.172357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-27 01:12:51.172386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172428 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172445 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:51.172469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-27 01:12:51.172486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-27 01:12:51.172528 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:51.172538 | orchestrator | 2026-02-27 
01:12:51.172548 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-27 01:12:51.172558 | orchestrator | Friday 27 February 2026 01:11:07 +0000 (0:00:02.874) 0:01:36.249 ******* 2026-02-27 01:12:51.172567 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:51.172577 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:51.172586 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:51.172596 | orchestrator | 2026-02-27 01:12:51.172605 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-27 01:12:51.172620 | orchestrator | Friday 27 February 2026 01:11:09 +0000 (0:00:01.316) 0:01:37.565 ******* 2026-02-27 01:12:51.172630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.172647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.172664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-27 01:12:51.172674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-27 01:12:51.172781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-27 01:12:51.172791 | orchestrator | 2026-02-27 01:12:51.172800 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-27 01:12:51.172810 | orchestrator | Friday 27 February 2026 01:11:14 +0000 (0:00:05.680) 0:01:43.245 ******* 2026-02-27 01:12:51.172820 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:51.172829 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:12:51.172838 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:12:51.172848 | orchestrator | 2026-02-27 01:12:51.172857 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-27 01:12:51.172867 | orchestrator | Friday 27 February 2026 01:11:15 +0000 (0:00:00.635) 0:01:43.881 ******* 2026-02-27 01:12:51.172876 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:51.172886 | orchestrator | 2026-02-27 01:12:51.172901 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-27 01:12:51.172911 | orchestrator | Friday 27 February 2026 01:11:17 +0000 (0:00:02.137) 0:01:46.019 ******* 2026-02-27 01:12:51.172921 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:51.172930 | orchestrator | 2026-02-27 01:12:51.172940 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-27 01:12:51.172955 | orchestrator | Friday 27 February 2026 01:11:20 +0000 (0:00:02.288) 0:01:48.308 ******* 2026-02-27 01:12:51.172965 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:51.172975 | orchestrator | 2026-02-27 01:12:51.172984 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-27 01:12:51.172994 | orchestrator | Friday 27 February 2026 01:11:43 +0000 (0:00:23.162) 0:02:11.470 ******* 2026-02-27 01:12:51.173003 | orchestrator | 2026-02-27 01:12:51.173012 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-27 01:12:51.173049 | orchestrator | Friday 27 February 2026 01:11:43 +0000 (0:00:00.071) 0:02:11.542 ******* 2026-02-27 01:12:51.173058 | orchestrator | 2026-02-27 01:12:51.173068 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-27 01:12:51.173077 | orchestrator | Friday 27 February 2026 01:11:43 +0000 (0:00:00.071) 0:02:11.614 ******* 2026-02-27 01:12:51.173087 | orchestrator | 2026-02-27 01:12:51.173096 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-27 01:12:51.173106 | orchestrator | Friday 27 February 2026 01:11:43 +0000 (0:00:00.071) 0:02:11.685 ******* 2026-02-27 01:12:51.173115 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:51.173125 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:51.173134 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:51.173144 | orchestrator | 2026-02-27 01:12:51.173153 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-27 01:12:51.173163 | orchestrator | Friday 27 February 2026 01:12:08 +0000 (0:00:25.293) 0:02:36.978 ******* 2026-02-27 01:12:51.173172 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:51.173182 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:51.173191 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:51.173200 | orchestrator | 2026-02-27 01:12:51.173210 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-27 01:12:51.173219 | orchestrator | Friday 27 February 2026 01:12:16 +0000 (0:00:07.408) 0:02:44.387 ******* 2026-02-27 01:12:51.173229 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:51.173238 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:51.173247 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:51.173257 | orchestrator | 2026-02-27 
01:12:51.173266 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-27 01:12:51.173276 | orchestrator | Friday 27 February 2026 01:12:41 +0000 (0:00:24.938) 0:03:09.326 ******* 2026-02-27 01:12:51.173285 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:12:51.173295 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:12:51.173304 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:12:51.173314 | orchestrator | 2026-02-27 01:12:51.173323 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-27 01:12:51.173333 | orchestrator | Friday 27 February 2026 01:12:47 +0000 (0:00:06.831) 0:03:16.157 ******* 2026-02-27 01:12:51.173342 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:12:51.173352 | orchestrator | 2026-02-27 01:12:51.173361 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:12:51.173371 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-27 01:12:51.173382 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 01:12:51.173392 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-27 01:12:51.173408 | orchestrator | 2026-02-27 01:12:51.173418 | orchestrator | 2026-02-27 01:12:51.173427 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:12:51.173441 | orchestrator | Friday 27 February 2026 01:12:48 +0000 (0:00:00.287) 0:03:16.445 ******* 2026-02-27 01:12:51.173451 | orchestrator | =============================================================================== 2026-02-27 01:12:51.173461 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.29s 2026-02-27 01:12:51.173470 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 24.94s 2026-02-27 01:12:51.173480 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 23.16s 2026-02-27 01:12:51.173489 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 18.08s 2026-02-27 01:12:51.173499 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.41s 2026-02-27 01:12:51.173508 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.10s 2026-02-27 01:12:51.173518 | orchestrator | cinder : Copying over config.json files for services -------------------- 7.03s 2026-02-27 01:12:51.173527 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.83s 2026-02-27 01:12:51.173537 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.75s 2026-02-27 01:12:51.173546 | orchestrator | cinder : Check cinder containers ---------------------------------------- 5.68s 2026-02-27 01:12:51.173556 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 5.41s 2026-02-27 01:12:51.173565 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 5.19s 2026-02-27 01:12:51.173575 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS key ------ 4.36s 2026-02-27 01:12:51.173584 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 4.32s 2026-02-27 01:12:51.173593 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.95s 2026-02-27 01:12:51.173603 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.83s 2026-02-27 01:12:51.173612 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.62s 2026-02-27 01:12:51.173628 | orchestrator | service-ks-register : 
cinder | Creating projects ------------------------ 3.18s 2026-02-27 01:12:51.173638 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.91s 2026-02-27 01:12:51.173647 | orchestrator | cinder : Copying over existing policy file ------------------------------ 2.87s 2026-02-27 01:12:51.173657 | orchestrator | 2026-02-27 01:12:51 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:12:51.173667 | orchestrator | 2026-02-27 01:12:51 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:51.173824 | orchestrator | 2026-02-27 01:12:51 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:51.173840 | orchestrator | 2026-02-27 01:12:51 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:12:54.217095 | orchestrator | 2026-02-27 01:12:54 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:12:54.217777 | orchestrator | 2026-02-27 01:12:54 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:54.218914 | orchestrator | 2026-02-27 01:12:54 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:54.219702 | orchestrator | 2026-02-27 01:12:54 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:12:57.271917 | orchestrator | 2026-02-27 01:12:57 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:12:57.274457 | orchestrator | 2026-02-27 01:12:57 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:12:57.276141 | orchestrator | 2026-02-27 01:12:57 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:12:57.276185 | orchestrator | 2026-02-27 01:12:57 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:00.320233 | orchestrator | 2026-02-27 01:13:00 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state 
STARTED 2026-02-27 01:13:00.320645 | orchestrator | 2026-02-27 01:13:00 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:00.321826 | orchestrator | 2026-02-27 01:13:00 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:00.322579 | orchestrator | 2026-02-27 01:13:00 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:03.366712 | orchestrator | 2026-02-27 01:13:03 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:03.367275 | orchestrator | 2026-02-27 01:13:03 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:03.368745 | orchestrator | 2026-02-27 01:13:03 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:03.368804 | orchestrator | 2026-02-27 01:13:03 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:06.419789 | orchestrator | 2026-02-27 01:13:06 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:06.421695 | orchestrator | 2026-02-27 01:13:06 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:06.423371 | orchestrator | 2026-02-27 01:13:06 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:06.423387 | orchestrator | 2026-02-27 01:13:06 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:09.467084 | orchestrator | 2026-02-27 01:13:09 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:09.468089 | orchestrator | 2026-02-27 01:13:09 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:09.468869 | orchestrator | 2026-02-27 01:13:09 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:09.469333 | orchestrator | 2026-02-27 01:13:09 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:12.511180 | orchestrator | 
2026-02-27 01:13:12 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:12.512839 | orchestrator | 2026-02-27 01:13:12 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:12.514550 | orchestrator | 2026-02-27 01:13:12 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:12.514602 | orchestrator | 2026-02-27 01:13:12 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:15.552986 | orchestrator | 2026-02-27 01:13:15 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:15.554374 | orchestrator | 2026-02-27 01:13:15 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:15.556327 | orchestrator | 2026-02-27 01:13:15 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:15.556377 | orchestrator | 2026-02-27 01:13:15 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:18.597135 | orchestrator | 2026-02-27 01:13:18 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:18.598213 | orchestrator | 2026-02-27 01:13:18 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:18.599206 | orchestrator | 2026-02-27 01:13:18 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:18.599383 | orchestrator | 2026-02-27 01:13:18 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:21.648930 | orchestrator | 2026-02-27 01:13:21 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:21.650191 | orchestrator | 2026-02-27 01:13:21 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:21.652436 | orchestrator | 2026-02-27 01:13:21 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:21.652513 | orchestrator | 2026-02-27 01:13:21 | INFO  | 
Wait 1 second(s) until the next check 2026-02-27 01:13:24.700297 | orchestrator | 2026-02-27 01:13:24 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:24.702215 | orchestrator | 2026-02-27 01:13:24 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:24.703926 | orchestrator | 2026-02-27 01:13:24 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:24.703963 | orchestrator | 2026-02-27 01:13:24 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:27.752517 | orchestrator | 2026-02-27 01:13:27 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:27.754859 | orchestrator | 2026-02-27 01:13:27 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:27.757517 | orchestrator | 2026-02-27 01:13:27 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:27.757879 | orchestrator | 2026-02-27 01:13:27 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:13:30.804023 | orchestrator | 2026-02-27 01:13:30 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:13:30.805538 | orchestrator | 2026-02-27 01:13:30 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state STARTED 2026-02-27 01:13:30.806849 | orchestrator | 2026-02-27 01:13:30 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state STARTED 2026-02-27 01:13:30.807319 | orchestrator | 2026-02-27 01:13:30 | INFO  | Wait 1 second(s) until the next check 2026-02-27 01:15:33.955540 | orchestrator | 2026-02-27 01:15:33.955711 | orchestrator | 2026-02-27 01:15:33.955734 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:15:33.955747 | orchestrator | 2026-02-27 01:15:33.955774 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 
01:15:33.955787 | orchestrator | Friday 27 February 2026 01:12:02 +0000 (0:00:00.264) 0:00:00.264 ******* 2026-02-27 01:15:33.955798 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:15:33.955810 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:15:33.955821 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:15:33.955832 | orchestrator | 2026-02-27 01:15:33.955844 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:15:33.955854 | orchestrator | Friday 27 February 2026 01:12:03 +0000 (0:00:00.486) 0:00:00.751 ******* 2026-02-27 01:15:33.955865 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2026-02-27 01:15:33.955877 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2026-02-27 01:15:33.955887 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2026-02-27 01:15:33.955898 | orchestrator | 2026-02-27 01:15:33.955909 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2026-02-27 01:15:33.955920 | orchestrator | 2026-02-27 01:15:33.955931 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2026-02-27 01:15:33.955942 | orchestrator | Friday 27 February 2026 01:12:04 +0000 (0:00:00.904) 0:00:01.655 ******* 2026-02-27 01:15:33.955953 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:15:33.955963 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:15:33.955996 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:15:33.956007 | orchestrator | 2026-02-27 01:15:33.956018 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-27 01:15:33.956030 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:15:33.956042 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:15:33.956053 | orchestrator | 
testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-27 01:15:33.956086 | orchestrator | 2026-02-27 01:15:33.956097 | orchestrator | 2026-02-27 01:15:33.956109 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-27 01:15:33.956123 | orchestrator | Friday 27 February 2026 01:14:26 +0000 (0:02:22.221) 0:02:23.877 ******* 2026-02-27 01:15:33.956135 | orchestrator | =============================================================================== 2026-02-27 01:15:33.956147 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 142.22s 2026-02-27 01:15:33.956159 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s 2026-02-27 01:15:33.956172 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.49s 2026-02-27 01:15:33.956184 | orchestrator | 2026-02-27 01:15:33.956197 | orchestrator | 2026-02-27 01:15:33 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED 2026-02-27 01:15:33.956209 | orchestrator | 2026-02-27 01:15:33 | INFO  | Task 71583f3f-22aa-46f9-b821-33ecfa1823b1 is in state SUCCESS 2026-02-27 01:15:33.956222 | orchestrator | 2026-02-27 01:15:33 | INFO  | Task 50a6e96d-d6da-49ad-8267-0435d948d501 is in state SUCCESS 2026-02-27 01:15:33.959439 | orchestrator | 2026-02-27 01:15:33.959496 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-27 01:15:33.959509 | orchestrator | 2026-02-27 01:15:33.959520 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-27 01:15:33.959531 | orchestrator | Friday 27 February 2026 01:12:38 +0000 (0:00:00.269) 0:00:00.269 ******* 2026-02-27 01:15:33.959542 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:15:33.959553 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:15:33.959564 | orchestrator | ok: 
[testbed-node-2] 2026-02-27 01:15:33.959575 | orchestrator | 2026-02-27 01:15:33.959586 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-27 01:15:33.959597 | orchestrator | Friday 27 February 2026 01:12:39 +0000 (0:00:00.332) 0:00:00.601 ******* 2026-02-27 01:15:33.959607 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-27 01:15:33.959619 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-27 01:15:33.959629 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-27 01:15:33.959640 | orchestrator | 2026-02-27 01:15:33.959651 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-27 01:15:33.959662 | orchestrator | 2026-02-27 01:15:33.959673 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-27 01:15:33.959684 | orchestrator | Friday 27 February 2026 01:12:39 +0000 (0:00:00.471) 0:00:01.073 ******* 2026-02-27 01:15:33.959694 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:15:33.959706 | orchestrator | 2026-02-27 01:15:33.959763 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-27 01:15:33.959774 | orchestrator | Friday 27 February 2026 01:12:40 +0000 (0:00:00.555) 0:00:01.628 ******* 2026-02-27 01:15:33.959799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.959863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.959876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.959887 | orchestrator | 2026-02-27 01:15:33.959898 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-27 01:15:33.959909 | orchestrator | Friday 27 February 2026 01:12:41 +0000 (0:00:00.797) 0:00:02.426 ******* 2026-02-27 01:15:33.959920 | orchestrator | [WARNING]: Skipped 
'/operations/prometheus/grafana' path due to this access 2026-02-27 01:15:33.959931 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-27 01:15:33.959942 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:15:33.959952 | orchestrator | 2026-02-27 01:15:33.959963 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-27 01:15:33.959973 | orchestrator | Friday 27 February 2026 01:12:42 +0000 (0:00:00.954) 0:00:03.381 ******* 2026-02-27 01:15:33.959984 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:15:33.959995 | orchestrator | 2026-02-27 01:15:33.960005 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-27 01:15:33.960016 | orchestrator | Friday 27 February 2026 01:12:42 +0000 (0:00:00.766) 0:00:04.148 ******* 2026-02-27 01:15:33.960045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960118 | orchestrator | 2026-02-27 01:15:33.960129 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-27 01:15:33.960140 | orchestrator | Friday 27 February 2026 01:12:44 +0000 (0:00:01.727) 0:00:05.875 ******* 2026-02-27 01:15:33.960151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 01:15:33.960162 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:15:33.960173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 01:15:33.960184 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:15:33.960205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 01:15:33.960216 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:15:33.960228 | orchestrator | 2026-02-27 01:15:33.960239 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-27 01:15:33.960249 | 
orchestrator | Friday 27 February 2026 01:12:44 +0000 (0:00:00.420) 0:00:06.296 ******* 2026-02-27 01:15:33.960261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 01:15:33.960279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 01:15:33.960295 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:15:33.960306 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:15:33.960317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-27 01:15:33.960328 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:15:33.960339 | orchestrator | 2026-02-27 01:15:33.960349 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-27 01:15:33.960360 | orchestrator | Friday 27 February 2026 01:12:45 +0000 (0:00:00.847) 0:00:07.143 ******* 2026-02-27 01:15:33.960371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960417 | orchestrator | 2026-02-27 01:15:33.960428 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-27 01:15:33.960439 | orchestrator | Friday 27 February 2026 01:12:47 +0000 (0:00:01.479) 0:00:08.622 ******* 2026-02-27 01:15:33.960452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960484 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-27 01:15:33.960529 | orchestrator | 2026-02-27 01:15:33.960547 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-27 01:15:33.960563 | orchestrator | Friday 27 February 2026 01:12:48 +0000 (0:00:01.461) 0:00:10.084 ******* 2026-02-27 01:15:33.960582 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:15:33.960601 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:15:33.960620 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:15:33.960637 | orchestrator | 2026-02-27 01:15:33.960656 | orchestrator | TASK [grafana : Configuring Prometheus as data 
source for Grafana] ************* 2026-02-27 01:15:33.960675 | orchestrator | Friday 27 February 2026 01:12:49 +0000 (0:00:00.575) 0:00:10.660 ******* 2026-02-27 01:15:33.960693 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-27 01:15:33.960711 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-27 01:15:33.960728 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-27 01:15:33.960739 | orchestrator | 2026-02-27 01:15:33.960750 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-27 01:15:33.960761 | orchestrator | Friday 27 February 2026 01:12:50 +0000 (0:00:01.270) 0:00:11.930 ******* 2026-02-27 01:15:33.960782 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-27 01:15:33.960801 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-27 01:15:33.960813 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-27 01:15:33.960823 | orchestrator | 2026-02-27 01:15:33.960834 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-27 01:15:33.960845 | orchestrator | Friday 27 February 2026 01:12:51 +0000 (0:00:01.317) 0:00:13.248 ******* 2026-02-27 01:15:33.960855 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-27 01:15:33.960866 | orchestrator | 2026-02-27 01:15:33.960877 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-27 01:15:33.960888 | orchestrator | Friday 27 February 2026 01:12:52 +0000 (0:00:00.820) 0:00:14.069 ******* 2026-02-27 01:15:33.960898 | 
orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-27 01:15:33.960909 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-27 01:15:33.960919 | orchestrator | ok: [testbed-node-0] 2026-02-27 01:15:33.960930 | orchestrator | ok: [testbed-node-1] 2026-02-27 01:15:33.960941 | orchestrator | ok: [testbed-node-2] 2026-02-27 01:15:33.960952 | orchestrator | 2026-02-27 01:15:33.960962 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-27 01:15:33.960973 | orchestrator | Friday 27 February 2026 01:12:53 +0000 (0:00:00.697) 0:00:14.766 ******* 2026-02-27 01:15:33.960984 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:15:33.960994 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:15:33.961005 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:15:33.961015 | orchestrator | 2026-02-27 01:15:33.961026 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-27 01:15:33.961037 | orchestrator | Friday 27 February 2026 01:12:54 +0000 (0:00:00.593) 0:00:15.360 ******* 2026-02-27 01:15:33.961078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088416, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2196765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088416, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2196765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088416, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2196765, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088497, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.233675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961209 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088497, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.233675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088497, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.233675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088442, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2251344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961249 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088442, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2251344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088442, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2251344, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088499, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2353675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961302 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088499, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2353675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088499, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2353675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088457, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.22854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-02-27 01:15:33.961341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088457, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.22854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088457, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.22854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088485, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2328339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088485, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2328339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088485, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2328339, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088415, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2194893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088415, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2194893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088415, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2194893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088419, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2236166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088419, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2236166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088419, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2236166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088446, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.225554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088446, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.225554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088446, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.225554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088468, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2305055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088468, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2305055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088468, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2305055, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088493, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.233402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088493, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.233402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088493, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.233402, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088436, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.224685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088436, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.224685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088436, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.224685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.961992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088482, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2320569, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088482, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2320569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088482, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2320569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088458, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2300131, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088458, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2300131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088458, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2300131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088455, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1772151530.22854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088455, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.22854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088455, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.22854, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088451, 'dev': 149, 
'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2269912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088451, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2269912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088451, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2269912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 
'inode': 1088472, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2320569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088472, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2320569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088472, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2320569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 44791, 'inode': 1088448, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.225554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088448, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.225554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088448, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.225554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088490, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2332175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088490, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2332175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088490, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2332175, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088635, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.268411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088635, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.268411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088635, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.268411, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088532, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2465105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088532, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2465105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088532, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2465105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088521, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2372751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088521, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2372751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088521, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2372751, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962710 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088563, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2495542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088563, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2495542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088563, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2495542, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2026-02-27 01:15:33.962753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088513, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2358453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088513, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2358453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088513, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2358453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088609, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.261673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.962810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088609, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.261673, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.963197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088609, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.261673, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.963227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088565, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2558446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.963254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088565, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2558446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.963266 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 
'inode': 1088565, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.2558446, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.963277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088614, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.26204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.963288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088614, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.26204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-27 01:15:33.963307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088614, 'dev': 149, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1772151530.26204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-27 01:15:33.963319 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/redfish.json, path=/operations/grafana/dashboards/infrastructure/redfish.json, mode=0644, size=38087)
2026-02-27 01:15:33.963337 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/redfish.json)
2026-02-27 01:15:33.963353 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/redfish.json)
2026-02-27 01:15:33.963364 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json, path=/operations/grafana/dashboards/infrastructure/nodes.json, mode=0644, size=21109)
2026-02-27 01:15:33.963375 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/nodes.json)
2026-02-27 01:15:33.963392 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/nodes.json)
2026-02-27 01:15:33.963403 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/memcached.json, path=/operations/grafana/dashboards/infrastructure/memcached.json, mode=0644, size=24243)
2026-02-27 01:15:33.963422 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/memcached.json)
2026-02-27 01:15:33.963438 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/memcached.json)
2026-02-27 01:15:33.963449 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/fluentd.json, path=/operations/grafana/dashboards/infrastructure/fluentd.json, mode=0644, size=82960)
2026-02-27 01:15:33.963460 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/fluentd.json)
2026-02-27 01:15:33.963501 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/fluentd.json)
2026-02-27 01:15:33.963514 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/libvirt.json, path=/operations/grafana/dashboards/infrastructure/libvirt.json, mode=0644, size=29672)
2026-02-27 01:15:33.963532 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/libvirt.json)
2026-02-27 01:15:33.963548 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/libvirt.json)
2026-02-27 01:15:33.963560 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/elasticsearch.json, path=/operations/grafana/dashboards/infrastructure/elasticsearch.json, mode=0644, size=187864)
2026-02-27 01:15:33.963574 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/elasticsearch.json)
2026-02-27 01:15:33.963593 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/elasticsearch.json)
2026-02-27 01:15:33.963610 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json, path=/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json, mode=0644, size=16098)
2026-02-27 01:15:33.963628 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node-cluster-rsrc-use.json)
2026-02-27 01:15:33.963647 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node-cluster-rsrc-use.json)
2026-02-27 01:15:33.963659 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/rabbitmq.json, path=/operations/grafana/dashboards/infrastructure/rabbitmq.json, mode=0644, size=222049)
2026-02-27 01:15:33.963671 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/rabbitmq.json)
2026-02-27 01:15:33.963682 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/rabbitmq.json)
2026-02-27 01:15:33.963698 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus_alertmanager.json, path=/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json, mode=0644, size=115472)
2026-02-27 01:15:33.963716 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus_alertmanager.json)
2026-02-27 01:15:33.963727 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus_alertmanager.json)
2026-02-27 01:15:33.963743 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/blackbox.json, path=/operations/grafana/dashboards/infrastructure/blackbox.json, mode=0644, size=31128)
2026-02-27 01:15:33.963755 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/blackbox.json)
2026-02-27 01:15:33.963766 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/blackbox.json)
2026-02-27 01:15:33.963782 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/cadvisor.json, path=/operations/grafana/dashboards/infrastructure/cadvisor.json, mode=0644, size=53882)
2026-02-27 01:15:33.963800 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/cadvisor.json)
2026-02-27 01:15:33.963813 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/cadvisor.json)
2026-02-27 01:15:33.963831 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/node_exporter_side_by_side.json, path=/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json, mode=0644, size=70691)
2026-02-27 01:15:33.963845 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/node_exporter_side_by_side.json)
2026-02-27 01:15:33.963858 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/node_exporter_side_by_side.json)
2026-02-27 01:15:33.963877 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/prometheus.json, path=/operations/grafana/dashboards/infrastructure/prometheus.json, mode=0644, size=21898)
2026-02-27 01:15:33.963896 | orchestrator | changed: [testbed-node-1] => (item=infrastructure/prometheus.json)
2026-02-27 01:15:33.963909 | orchestrator | changed: [testbed-node-0] => (item=infrastructure/prometheus.json)
2026-02-27 01:15:33.963922 | orchestrator |
2026-02-27 01:15:33.963935 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2026-02-27 01:15:33.963947 | orchestrator | Friday 27 February 2026 01:13:32 +0000 (0:00:38.146) 0:00:53.506 *******
2026-02-27 01:15:33.963964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-27 01:15:33.963977 | orchestrator | changed: [testbed-node-0] => (item=grafana)
2026-02-27 01:15:33.963991 | orchestrator | changed: [testbed-node-1] => (item=grafana)
2026-02-27 01:15:33.964008 | orchestrator |
2026-02-27 01:15:33.964021 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2026-02-27 01:15:33.964033 | orchestrator | Friday 27 February 2026 01:13:33 +0000 (0:00:01.189) 0:00:54.696 *******
2026-02-27 01:15:33.964046 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:15:33.964079 | orchestrator |
2026-02-27 01:15:33.964092 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2026-02-27 01:15:33.964110 | orchestrator | Friday 27 February 2026 01:13:35 +0000 (0:00:02.387) 0:00:57.083 *******
2026-02-27 01:15:33.964122 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:15:33.964135 | orchestrator |
2026-02-27 01:15:33.964148 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-27 01:15:33.964160 | orchestrator | Friday 27 February 2026 01:13:38 +0000 (0:00:02.563) 0:00:59.646 *******
2026-02-27 01:15:33.964171 | orchestrator |
2026-02-27 01:15:33.964182 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-27 01:15:33.964193 | orchestrator | Friday 27 February 2026 01:13:38 +0000 (0:00:00.070) 0:00:59.717 *******
2026-02-27 01:15:33.964204 | orchestrator |
2026-02-27 01:15:33.964214 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-02-27 01:15:33.964225 | orchestrator | Friday 27 February 2026 01:13:38 +0000 (0:00:00.302) 0:01:00.019 *******
2026-02-27 01:15:33.964236 | orchestrator |
2026-02-27 01:15:33.964247 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-02-27 01:15:33.964258 | orchestrator | Friday 27 February 2026 01:13:38 +0000 (0:00:00.069) 0:01:00.089 *******
2026-02-27 01:15:33.964268 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:15:33.964279 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:15:33.964290 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:15:33.964301 | orchestrator |
2026-02-27 01:15:33.964312 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-02-27 01:15:33.964322 | orchestrator | Friday 27 February 2026 01:13:40 +0000 (0:00:02.072) 0:01:02.162 *******
2026-02-27 01:15:33.964333 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:15:33.964344 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:15:33.964355 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-02-27 01:15:33.964366 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-02-27 01:15:33.964377 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2026-02-27 01:15:33.964387 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left).
2026-02-27 01:15:33.964398 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:15:33.964409 | orchestrator |
2026-02-27 01:15:33.964420 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-27 01:15:33.964431 | orchestrator | Friday 27 February 2026 01:14:32 +0000 (0:00:51.759) 0:01:53.921 *******
2026-02-27 01:15:33.964442 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:15:33.964452 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:15:33.964463 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:15:33.964474 | orchestrator |
2026-02-27 01:15:33.964489 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-27 01:15:33.964500 | orchestrator | Friday 27 February 2026 01:15:03 +0000 (0:00:31.057) 0:02:24.979 *******
2026-02-27 01:15:33.964511 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:15:33.964522 | orchestrator |
2026-02-27 01:15:33.964533 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-27 01:15:33.964543 | orchestrator | Friday 27 February 2026 01:15:06 +0000 (0:00:02.371) 0:02:27.350 *******
2026-02-27 01:15:33.964554 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:15:33.964571 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:15:33.964582 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:15:33.964592 | orchestrator |
2026-02-27 01:15:33.964603 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-27 01:15:33.964614 | orchestrator | Friday 27 February 2026 01:15:06 +0000 (0:00:00.492) 0:02:27.843 *******
2026-02-27 01:15:33.964626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-02-27 01:15:33.964638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-27 01:15:33.964650 | orchestrator |
2026-02-27 01:15:33.964661 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-27 01:15:33.964672 | orchestrator | Friday 27 February 2026 01:15:09 +0000 (0:00:02.532) 0:02:30.375 *******
2026-02-27 01:15:33.964683 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:15:33.964693 | orchestrator |
2026-02-27 01:15:33.964704 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 01:15:33.964715 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-27 01:15:33.964727 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-27 01:15:33.964738 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-27 01:15:33.964749 | orchestrator |
2026-02-27 01:15:33.964759 | orchestrator |
2026-02-27 01:15:33.964770 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 01:15:33.964781 | orchestrator | Friday 27 February 2026 01:15:09 +0000 (0:00:00.297) 0:02:30.674 *******
2026-02-27 01:15:33.964797 | orchestrator | ===============================================================================
2026-02-27 01:15:33.964808 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 51.76s
2026-02-27 01:15:33.964819 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.15s
2026-02-27 01:15:33.964829 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 31.06s
2026-02-27 01:15:33.964840 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.56s
2026-02-27 01:15:33.964851 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.53s
2026-02-27 01:15:33.964862 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.39s
2026-02-27 01:15:33.964873 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.37s
2026-02-27 01:15:33.964883 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.07s
2026-02-27 01:15:33.964894 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.73s
2026-02-27 01:15:33.964905 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.48s
2026-02-27 01:15:33.964915 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.46s
2026-02-27 01:15:33.964926 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.32s
2026-02-27 01:15:33.964937 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s
2026-02-27 01:15:33.964947 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.19s
2026-02-27 01:15:33.964964 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.96s
2026-02-27 01:15:33.964974 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.85s
2026-02-27 01:15:33.964985 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.82s
2026-02-27 01:15:33.964996 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.80s
2026-02-27 01:15:33.965006 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.77s
2026-02-27 01:15:33.965017 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.70s
2026-02-27 01:15:33.965028 | orchestrator | 2026-02-27 01:15:33 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:15:36.988407 | orchestrator | 2026-02-27 01:15:36 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:15:36.989382 | orchestrator | 2026-02-27 01:15:36 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:15:36.989423 | orchestrator | 2026-02-27 01:15:36 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:15:40.127133 | orchestrator | 2026-02-27 01:15:40 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:15:40.127574 | orchestrator | 2026-02-27 01:15:40 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:15:40.127608 | orchestrator | 2026-02-27 01:15:40 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:15:43.176048 | orchestrator | 2026-02-27 01:15:43 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:15:43.178783 | orchestrator | 2026-02-27 01:15:43 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:15:43.178862 | orchestrator | 2026-02-27 01:15:43 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:15:46.223555 | orchestrator | 2026-02-27 01:15:46 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:15:46.223643 | orchestrator | 2026-02-27 01:15:46 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:15:46.223652 | orchestrator | 2026-02-27 01:15:46 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:15:49.256671 | orchestrator | 2026-02-27 01:15:49 |
INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:15:49.256851 | orchestrator | 2026-02-27 01:15:49 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
[... both tasks remain in state STARTED; the same two status checks and "Wait 1 second(s) until the next check" messages repeat every ~3 seconds until 01:18:58 ...]
2026-02-27 01:18:58.212388 | orchestrator | 2026-02-27 01:18:58 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state STARTED
2026-02-27 01:18:58.213748 | orchestrator | 2026-02-27 01:18:58 | INFO
| Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:18:58.214295 | orchestrator | 2026-02-27 01:18:58 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:01.266786 | orchestrator | 2026-02-27 01:19:01 | INFO  | Task d4f3520d-57b8-4119-a9a5-552552c88680 is in state SUCCESS
2026-02-27 01:19:01.269580 | orchestrator |
2026-02-27 01:19:01.269745 | orchestrator |
2026-02-27 01:19:01.269764 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 01:19:01.269777 | orchestrator |
2026-02-27 01:19:01.269788 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-27 01:19:01.269800 | orchestrator | Friday 27 February 2026 01:09:51 +0000 (0:00:00.442) 0:00:00.442 *******
2026-02-27 01:19:01.269811 | orchestrator | changed: [testbed-manager]
2026-02-27 01:19:01.269952 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.269973 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:01.269992 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:01.270010 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:19:01.270122 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:19:01.270182 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:19:01.270201 | orchestrator |
2026-02-27 01:19:01.270220 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 01:19:01.270239 | orchestrator | Friday 27 February 2026 01:09:53 +0000 (0:00:01.312) 0:00:01.641 *******
2026-02-27 01:19:01.270257 | orchestrator | changed: [testbed-manager]
2026-02-27 01:19:01.270275 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.270294 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:01.270305 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:01.270316 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:19:01.270327 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:19:01.270344 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:19:01.270362 | orchestrator |
2026-02-27 01:19:01.270382 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 01:19:01.270404 | orchestrator | Friday 27 February 2026 01:09:54 +0000 (0:00:01.312) 0:00:02.953 *******
2026-02-27 01:19:01.270423 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-27 01:19:01.270440 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-27 01:19:01.270479 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-27 01:19:01.270497 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-27 01:19:01.270529 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-27 01:19:01.270548 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-27 01:19:01.270567 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-27 01:19:01.270586 | orchestrator |
2026-02-27 01:19:01.270598 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-27 01:19:01.270609 | orchestrator |
2026-02-27 01:19:01.270620 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-27 01:19:01.270630 | orchestrator | Friday 27 February 2026 01:09:56 +0000 (0:00:01.630) 0:00:04.584 *******
2026-02-27 01:19:01.270641 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:01.270652 | orchestrator |
2026-02-27 01:19:01.270663 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-27 01:19:01.270674 | orchestrator | Friday 27 February 2026 01:09:57 +0000 (0:00:01.349) 0:00:05.933 *******
2026-02-27 01:19:01.270702 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-27 01:19:01.270714 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-27 01:19:01.270725 | orchestrator |
2026-02-27 01:19:01.270735 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-27 01:19:01.270771 | orchestrator | Friday 27 February 2026 01:10:01 +0000 (0:00:04.471) 0:00:10.405 *******
2026-02-27 01:19:01.270783 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-27 01:19:01.270794 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-27 01:19:01.270805 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.270816 | orchestrator |
2026-02-27 01:19:01.270826 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-27 01:19:01.270837 | orchestrator | Friday 27 February 2026 01:10:06 +0000 (0:00:04.107) 0:00:14.512 *******
2026-02-27 01:19:01.270847 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.270858 | orchestrator |
2026-02-27 01:19:01.270869 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-27 01:19:01.270879 | orchestrator | Friday 27 February 2026 01:10:07 +0000 (0:00:01.042) 0:00:15.554 *******
2026-02-27 01:19:01.270890 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.270900 | orchestrator |
2026-02-27 01:19:01.270911 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-27 01:19:01.270922 | orchestrator | Friday 27 February 2026 01:10:08 +0000 (0:00:01.655) 0:00:17.209 *******
2026-02-27 01:19:01.270932 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.270943 | orchestrator |
2026-02-27 01:19:01.270953 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-27 01:19:01.270964 | orchestrator | Friday 27 February 2026 01:10:13 +0000 (0:00:04.319) 0:00:21.529 *******
2026-02-27 01:19:01.270974 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.270985 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.271121 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.271169 | orchestrator |
2026-02-27 01:19:01.271237 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-27 01:19:01.271258 | orchestrator | Friday 27 February 2026 01:10:13 +0000 (0:00:00.470) 0:00:21.999 *******
2026-02-27 01:19:01.271278 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.271297 | orchestrator |
2026-02-27 01:19:01.271315 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-27 01:19:01.271330 | orchestrator | Friday 27 February 2026 01:10:48 +0000 (0:00:34.516) 0:00:56.516 *******
2026-02-27 01:19:01.271341 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.271352 | orchestrator |
2026-02-27 01:19:01.271362 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-27 01:19:01.271373 | orchestrator | Friday 27 February 2026 01:11:05 +0000 (0:00:17.680) 0:01:14.196 *******
2026-02-27 01:19:01.271384 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.271394 | orchestrator |
2026-02-27 01:19:01.271405 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-27 01:19:01.271416 | orchestrator | Friday 27 February 2026 01:11:23 +0000 (0:00:17.965) 0:01:32.162 *******
2026-02-27 01:19:01.271446 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.271458 | orchestrator |
2026-02-27 01:19:01.271469 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-27 01:19:01.271480 | orchestrator | Friday 27 February 2026 01:11:25 +0000 (0:00:01.562) 0:01:33.724 *******
2026-02-27 01:19:01.271490 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.271501 | orchestrator |
2026-02-27 01:19:01.271512 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-27 01:19:01.271523 | orchestrator | Friday 27 February 2026 01:11:25 +0000 (0:00:00.589) 0:01:34.313 *******
2026-02-27 01:19:01.271534 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:01.271545 | orchestrator |
2026-02-27 01:19:01.271555 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-27 01:19:01.271566 | orchestrator | Friday 27 February 2026 01:11:26 +0000 (0:00:00.575) 0:01:34.889 *******
2026-02-27 01:19:01.271577 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.271587 | orchestrator |
2026-02-27 01:19:01.271609 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-27 01:19:01.271620 | orchestrator | Friday 27 February 2026 01:11:47 +0000 (0:00:21.241) 0:01:56.130 *******
2026-02-27 01:19:01.271631 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.271642 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.271653 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.271663 | orchestrator |
2026-02-27 01:19:01.271674 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-27 01:19:01.271685 | orchestrator |
2026-02-27 01:19:01.271695 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-27 01:19:01.271706 | orchestrator | Friday 27 February 2026 01:11:48 +0000 (0:00:00.449) 0:01:56.579 *******
2026-02-27 01:19:01.271717 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:01.271727 | orchestrator |
2026-02-27 01:19:01.271739 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-27 01:19:01.271749 | orchestrator | Friday 27 February 2026 01:11:49 +0000 (0:00:00.970) 0:01:57.550 *******
2026-02-27 01:19:01.271760 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.271771 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.271782 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.271792 | orchestrator |
2026-02-27 01:19:01.271803 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-27 01:19:01.271814 | orchestrator | Friday 27 February 2026 01:11:51 +0000 (0:00:02.126) 0:01:59.676 *******
2026-02-27 01:19:01.271824 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.271835 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.271846 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.271856 | orchestrator |
2026-02-27 01:19:01.271867 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-27 01:19:01.271878 | orchestrator | Friday 27 February 2026 01:11:53 +0000 (0:00:02.210) 0:02:01.886 *******
2026-02-27 01:19:01.271888 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.271906 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.271917 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.271928 | orchestrator |
2026-02-27 01:19:01.271939 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-27 01:19:01.271949 | orchestrator | Friday 27 February 2026 01:11:53 +0000 (0:00:00.360) 0:02:02.247 *******
2026-02-27 01:19:01.271960 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-27 01:19:01.271971 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.271981 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-27 01:19:01.271992 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272003 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-27 01:19:01.272014 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-27 01:19:01.272024 | orchestrator |
2026-02-27 01:19:01.272035 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-27 01:19:01.272046 | orchestrator | Friday 27 February 2026 01:12:02 +0000 (0:00:08.522) 0:02:10.770 *******
2026-02-27 01:19:01.272056 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.272067 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272077 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272088 | orchestrator |
2026-02-27 01:19:01.272101 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-27 01:19:01.272119 | orchestrator | Friday 27 February 2026 01:12:02 +0000 (0:00:00.450) 0:02:11.221 *******
2026-02-27 01:19:01.272180 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-27 01:19:01.272199 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.272214 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-27 01:19:01.272225 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272235 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-27 01:19:01.272255 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272265 | orchestrator |
2026-02-27 01:19:01.272276 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-27 01:19:01.272287 | orchestrator | Friday 27 February 2026 01:12:04 +0000 (0:00:01.302) 0:02:12.523 *******
2026-02-27 01:19:01.272298 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272308 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272319 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.272330 | orchestrator |
2026-02-27 01:19:01.272345 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-27 01:19:01.272364 | orchestrator | Friday 27 February 2026 01:12:05 +0000 (0:00:01.081) 0:02:13.605 *******
2026-02-27 01:19:01.272383 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272401 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272419 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.272439 | orchestrator |
2026-02-27 01:19:01.272459 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-27 01:19:01.272476 | orchestrator | Friday 27 February 2026 01:12:06 +0000 (0:00:01.145) 0:02:14.751 *******
2026-02-27 01:19:01.272493 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272505 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272527 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.272538 | orchestrator |
2026-02-27 01:19:01.272549 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-27 01:19:01.272560 | orchestrator | Friday 27 February 2026 01:12:08 +0000 (0:00:02.634) 0:02:17.386 *******
2026-02-27 01:19:01.272571 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272581 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272592 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.272603 | orchestrator |
2026-02-27 01:19:01.272614 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-27 01:19:01.272624 | orchestrator | Friday 27 February 2026 01:12:30 +0000 (0:00:22.113) 0:02:39.500 *******
2026-02-27 01:19:01.272635 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272646 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272657 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.272667 | orchestrator |
2026-02-27 01:19:01.272678 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-27 01:19:01.272689 | orchestrator | Friday 27 February 2026 01:12:46 +0000 (0:00:15.709) 0:02:55.209 *******
2026-02-27 01:19:01.272700 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.272711 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272722 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272732 | orchestrator |
2026-02-27 01:19:01.272743 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-27 01:19:01.272754 | orchestrator | Friday 27 February 2026 01:12:47 +0000 (0:00:01.091) 0:02:56.301 *******
2026-02-27 01:19:01.272765 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272775 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272786 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.272796 | orchestrator |
2026-02-27 01:19:01.272807 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-27 01:19:01.272818 | orchestrator | Friday 27 February 2026 01:13:02 +0000 (0:00:14.620) 0:03:10.922 *******
2026-02-27 01:19:01.272829 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.272840 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272851 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272861 | orchestrator |
2026-02-27 01:19:01.272872 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-27 01:19:01.272883 | orchestrator | Friday 27 February 2026 01:13:03 +0000 (0:00:01.134) 0:03:12.056 *******
2026-02-27 01:19:01.272894 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.272904 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.272916 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.272945 | orchestrator |
2026-02-27 01:19:01.272965 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-27 01:19:01.272983 | orchestrator |
2026-02-27 01:19:01.273001 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-27 01:19:01.273013 | orchestrator | Friday 27 February 2026 01:13:04 +0000 (0:00:00.574) 0:03:12.630 *******
2026-02-27 01:19:01.273024 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:01.273036 | orchestrator |
2026-02-27 01:19:01.273053 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-02-27 01:19:01.273064 | orchestrator | Friday 27 February 2026 01:13:04 +0000 (0:00:00.591) 0:03:13.222 *******
2026-02-27 01:19:01.273075 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-02-27 01:19:01.273085 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-02-27 01:19:01.273096 | orchestrator |
2026-02-27 01:19:01.273107 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-02-27 01:19:01.273117 | orchestrator | Friday 27 February 2026 01:13:08 +0000 (0:00:03.807) 0:03:17.030 *******
2026-02-27 01:19:01.273154 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-02-27 01:19:01.273172 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-02-27 01:19:01.273184 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-02-27 01:19:01.273195 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-02-27 01:19:01.273206 | orchestrator |
2026-02-27 01:19:01.273217 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-02-27 01:19:01.273227 | orchestrator | Friday 27 February 2026 01:13:15 +0000 (0:00:07.312) 0:03:24.342 *******
2026-02-27 01:19:01.273238 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-27 01:19:01.273249 | orchestrator |
2026-02-27 01:19:01.273260 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-02-27 01:19:01.273271 | orchestrator | Friday 27 February 2026 01:13:19 +0000 (0:00:03.738) 0:03:28.081 *******
2026-02-27 01:19:01.273281 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-27 01:19:01.273292 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-02-27 01:19:01.273308 | orchestrator |
2026-02-27 01:19:01.273326 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-02-27 01:19:01.273347 | orchestrator | Friday 27 February 2026 01:13:24 +0000 (0:00:04.559) 0:03:32.641 *******
2026-02-27 01:19:01.273365 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-27 01:19:01.273384 | orchestrator |
2026-02-27 01:19:01.273396 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-02-27 01:19:01.273415 | orchestrator | Friday 27 February 2026 01:13:27 +0000 (0:00:03.087) 0:03:35.729 *******
2026-02-27 01:19:01.273434 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-02-27 01:19:01.273452 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-02-27 01:19:01.273471 | orchestrator |
2026-02-27 01:19:01.273488 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-27 01:19:01.273510 | orchestrator | Friday 27 February 2026 01:13:34 +0000 (0:00:07.147) 0:03:42.876 *******
2026-02-27 01:19:01.273527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.273559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.273574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.273595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.273609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.273627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.273639 | orchestrator |
2026-02-27 01:19:01.273650 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2026-02-27 01:19:01.273661 | orchestrator | Friday 27 February 2026 01:13:35 +0000 (0:00:01.467) 0:03:44.343 *******
2026-02-27 01:19:01.273672 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.273682 | orchestrator |
2026-02-27 01:19:01.273693 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-02-27 01:19:01.273704 | orchestrator | Friday 27 February 2026 01:13:35 +0000 (0:00:00.148) 0:03:44.491 *******
2026-02-27 01:19:01.273715 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.273726 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.273736 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.273747 | orchestrator |
2026-02-27 01:19:01.273763 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-02-27 01:19:01.273774 | orchestrator | Friday 27 February 2026 01:13:36 +0000 (0:00:00.972) 0:03:44.858 *******
2026-02-27 01:19:01.273785 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-27 01:19:01.273796 | orchestrator |
2026-02-27 01:19:01.273806 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-02-27 01:19:01.273817 | orchestrator | Friday 27 February 2026 01:13:37 +0000 (0:00:00.972) 0:03:45.831 *******
2026-02-27 01:19:01.273828 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.273839 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.273850 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.273860 | orchestrator |
2026-02-27 01:19:01.273871 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-27 01:19:01.273882 | orchestrator | Friday 27 February 2026 01:13:37 +0000 (0:00:00.328) 0:03:46.159 *******
2026-02-27 01:19:01.273893 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:01.273904 | orchestrator |
2026-02-27 01:19:01.273915 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-02-27 01:19:01.273925 | orchestrator | Friday 27 February 2026 01:13:38 +0000 (0:00:00.575) 0:03:46.735 *******
2026-02-27 01:19:01.273944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.273964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.273982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.273995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.274007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.274072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.274086 | orchestrator |
2026-02-27 01:19:01.274097 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2026-02-27 01:19:01.274108 | orchestrator | Friday 27 February 2026 01:13:41 +0000 (0:00:03.057) 0:03:49.793 *******
2026-02-27 01:19:01.274120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.274167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.274181 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.274193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.274219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.274231 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.274243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.274255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-27 01:19:01.274266 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.274277 | orchestrator |
2026-02-27 01:19:01.274293 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-02-27 01:19:01.274304 | orchestrator | Friday 27 February 2026 01:13:41 +0000 (0:00:00.634) 0:03:50.428 *******
2026-02-27 01:19:01.274315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-27 01:19:01.274334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.274346 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.274366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-27 01:19:01.274379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.274390 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.274407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-27 01:19:01.274426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.274437 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.274448 | orchestrator | 2026-02-27 01:19:01.274459 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-27 01:19:01.274470 | orchestrator | Friday 27 February 2026 01:13:42 +0000 (0:00:00.890) 0:03:51.319 ******* 2026-02-27 01:19:01.274489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.274506 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.274519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.274544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.274557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.274568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.274579 | orchestrator | 2026-02-27 01:19:01.274590 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-27 01:19:01.274601 | orchestrator | Friday 27 February 2026 01:13:45 +0000 (0:00:02.673) 0:03:53.993 ******* 2026-02-27 01:19:01.274618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2026-02-27 01:19:01.274639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.274659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.274672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.274688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.274700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.274719 | orchestrator | 2026-02-27 01:19:01.274730 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-27 01:19:01.274741 | orchestrator | Friday 27 February 2026 01:13:51 +0000 (0:00:06.174) 0:04:00.167 ******* 2026-02-27 01:19:01.274759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-27 01:19:01.274775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.274794 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.274814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}})  2026-02-27 01:19:01.274842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.274874 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.274894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2026-02-27 01:19:01.274925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.274938 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.274949 | orchestrator | 2026-02-27 01:19:01.274960 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-27 01:19:01.274970 | orchestrator | Friday 27 February 2026 01:13:52 +0000 (0:00:00.632) 0:04:00.800 ******* 2026-02-27 01:19:01.274981 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:01.274992 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:19:01.275002 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:19:01.275013 | orchestrator | 2026-02-27 01:19:01.275027 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-27 01:19:01.275046 | orchestrator | Friday 27 February 2026 01:13:53 +0000 (0:00:01.426) 0:04:02.227 ******* 2026-02-27 01:19:01.275065 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.275085 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.275103 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.275122 | orchestrator | 2026-02-27 01:19:01.275167 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-27 01:19:01.275187 | orchestrator | Friday 27 February 2026 01:13:54 +0000 (0:00:00.354) 0:04:02.581 
******* 2026-02-27 01:19:01.275207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.275232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.275255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:01.275268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.275286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.275302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.275314 | orchestrator | 2026-02-27 01:19:01.275325 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-27 01:19:01.275336 | orchestrator | Friday 27 February 2026 01:13:56 +0000 
(0:00:02.219) 0:04:04.801 *******
2026-02-27 01:19:01.275347 | orchestrator |
2026-02-27 01:19:01.275358 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-27 01:19:01.275369 | orchestrator | Friday 27 February 2026 01:13:56 +0000 (0:00:00.135) 0:04:04.937 *******
2026-02-27 01:19:01.275380 | orchestrator |
2026-02-27 01:19:01.275390 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-27 01:19:01.275401 | orchestrator | Friday 27 February 2026 01:13:56 +0000 (0:00:00.133) 0:04:05.070 *******
2026-02-27 01:19:01.275414 | orchestrator |
2026-02-27 01:19:01.275433 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-27 01:19:01.275452 | orchestrator | Friday 27 February 2026 01:13:56 +0000 (0:00:00.132) 0:04:05.203 *******
2026-02-27 01:19:01.275503 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.275515 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:01.275525 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:01.275536 | orchestrator |
2026-02-27 01:19:01.275549 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-27 01:19:01.275567 | orchestrator | Friday 27 February 2026 01:14:19 +0000 (0:00:22.593) 0:04:27.797 *******
2026-02-27 01:19:01.275586 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.275605 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:01.275624 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:01.275641 | orchestrator |
2026-02-27 01:19:01.275660 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-27 01:19:01.275678 | orchestrator |
2026-02-27 01:19:01.275694 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-27 01:19:01.275706 | orchestrator | Friday 27 February
2026 01:14:29 +0000 (0:00:10.369) 0:04:38.167 *******
2026-02-27 01:19:01.275717 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:01.275729 | orchestrator |
2026-02-27 01:19:01.275747 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-27 01:19:01.275759 | orchestrator | Friday 27 February 2026 01:14:30 +0000 (0:00:01.319) 0:04:39.486 *******
2026-02-27 01:19:01.275770 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:19:01.275780 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:19:01.275792 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:19:01.275811 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.275829 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.275848 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.275880 | orchestrator |
2026-02-27 01:19:01.275899 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-27 01:19:01.275918 | orchestrator | Friday 27 February 2026 01:14:31 +0000 (0:00:00.647) 0:04:40.134 *******
2026-02-27 01:19:01.275936 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.275954 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.275973 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.275991 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-27 01:19:01.276010 | orchestrator |
2026-02-27 01:19:01.276030 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-27 01:19:01.276048 | orchestrator | Friday 27 February 2026 01:14:32 +0000 (0:00:01.062) 0:04:41.197 *******
2026-02-27 01:19:01.276067 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-27 01:19:01.276085 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-27 01:19:01.276102 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-27 01:19:01.276114 | orchestrator |
2026-02-27 01:19:01.276125 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-27 01:19:01.276206 | orchestrator | Friday 27 February 2026 01:14:33 +0000 (0:00:00.887) 0:04:42.084 *******
2026-02-27 01:19:01.276217 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-27 01:19:01.276229 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-27 01:19:01.276240 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-27 01:19:01.276250 | orchestrator |
2026-02-27 01:19:01.276262 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-27 01:19:01.276281 | orchestrator | Friday 27 February 2026 01:14:35 +0000 (0:00:01.572) 0:04:43.657 *******
2026-02-27 01:19:01.276301 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-27 01:19:01.276319 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:19:01.276337 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-27 01:19:01.276352 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:19:01.276363 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-27 01:19:01.276377 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:19:01.276396 | orchestrator |
2026-02-27 01:19:01.276416 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-27 01:19:01.276435 | orchestrator | Friday 27 February 2026 01:14:35 +0000 (0:00:00.717) 0:04:44.375 *******
2026-02-27 01:19:01.276461 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 01:19:01.276476 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
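The `module-load` tasks above do two things per node: load `br_netfilter` immediately, then persist it via a `modules-load.d` drop-in so it is reloaded at boot. A minimal sketch of those two steps outside Ansible; this is a hypothetical helper, not the role's actual implementation (the drop-in naming follows the systemd `modules-load.d(5)` convention, which lists one module name per line):

```python
import subprocess
from pathlib import Path

def load_module(name: str) -> None:
    """Load the kernel module now (equivalent to `modprobe <name>`); requires root."""
    subprocess.run(["modprobe", name], check=True)

def persist_module(name: str, conf_dir: Path = Path("/etc/modules-load.d")) -> Path:
    """Persist the module across reboots via a modules-load.d drop-in file."""
    conf_dir.mkdir(parents=True, exist_ok=True)
    conf = conf_dir / f"{name}.conf"
    conf.write_text(f"{name}\n")  # one module name per line
    return conf
```

`conf_dir` is parameterized so the helper can be exercised against a scratch directory without touching `/etc`; running `persist_module("br_netfilter")` as root mirrors the "Persist modules via modules-load.d" step logged above.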
2026-02-27 01:19:01.276487 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.276498 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 01:19:01.276509 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 01:19:01.276519 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.276530 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 01:19:01.276541 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 01:19:01.276552 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 01:19:01.276562 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 01:19:01.276573 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.276584 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-27 01:19:01.276594 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 01:19:01.276605 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 01:19:01.276616 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-27 01:19:01.276636 | orchestrator |
2026-02-27 01:19:01.276647 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-27 01:19:01.276658 | orchestrator | Friday 27 February 2026 01:14:37 +0000 (0:00:01.712) 0:04:46.087 *******
2026-02-27 01:19:01.276668 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.276679 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.276690 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.276701 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:19:01.276711 | orchestrator | changed:
[testbed-node-4] 2026-02-27 01:19:01.276722 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:19:01.276733 | orchestrator | 2026-02-27 01:19:01.276743 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-27 01:19:01.276752 | orchestrator | Friday 27 February 2026 01:14:38 +0000 (0:00:01.294) 0:04:47.382 ******* 2026-02-27 01:19:01.276762 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.276771 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.276781 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.276790 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:19:01.276800 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:19:01.276809 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:19:01.276818 | orchestrator | 2026-02-27 01:19:01.276828 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-27 01:19:01.276838 | orchestrator | Friday 27 February 2026 01:14:40 +0000 (0:00:01.801) 0:04:49.184 ******* 2026-02-27 01:19:01.277469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': 
'30'}}}) 2026-02-27 01:19:01.277503 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.277523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278311 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278340 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278344 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2026-02-27 01:19:01.278369 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278391 | orchestrator | 2026-02-27 01:19:01.278396 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-27 01:19:01.278400 | orchestrator | Friday 27 February 2026 01:14:42 +0000 (0:00:02.233) 0:04:51.417 ******* 2026-02-27 01:19:01.278404 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:19:01.278409 | orchestrator | 2026-02-27 01:19:01.278413 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-27 01:19:01.278417 | orchestrator | Friday 27 February 2026 01:14:44 +0000 (0:00:01.335) 0:04:52.753 ******* 2026-02-27 01:19:01.278421 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278466 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278531 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.278535 | orchestrator | 2026-02-27 01:19:01.278539 
| orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-27 01:19:01.278543 | orchestrator | Friday 27 February 2026 01:14:48 +0000 (0:00:04.276) 0:04:57.029 ******* 2026-02-27 01:19:01.278547 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.278557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.278561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.278581 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.278585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.278589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278596 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.278603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.278607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.278611 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278615 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.278632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.278636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278643 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.278647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.278653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278657 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.278661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.278665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278669 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.278673 | orchestrator | 2026-02-27 01:19:01.278677 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-27 01:19:01.278681 | orchestrator | 
Friday 27 February 2026 01:14:49 +0000 (0:00:01.414) 0:04:58.443 ******* 2026-02-27 01:19:01.278697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.278702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.278711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278715 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.278721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.278726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.278742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278746 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.278750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.278757 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.278764 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278768 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.278772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.278776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278779 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.278794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.278802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278806 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.278810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.278817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.278821 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.278825 | orchestrator | 2026-02-27 01:19:01.278829 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-27 01:19:01.278833 | orchestrator | Friday 27 February 2026 01:14:52 +0000 (0:00:02.417) 0:05:00.861 ******* 2026-02-27 01:19:01.278837 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.278840 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.278844 | orchestrator | skipping: 
[testbed-node-2] 2026-02-27 01:19:01.278848 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-27 01:19:01.278852 | orchestrator | 2026-02-27 01:19:01.278855 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-27 01:19:01.278859 | orchestrator | Friday 27 February 2026 01:14:53 +0000 (0:00:01.157) 0:05:02.018 ******* 2026-02-27 01:19:01.278863 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-27 01:19:01.278867 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-27 01:19:01.278871 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-27 01:19:01.278875 | orchestrator | 2026-02-27 01:19:01.278878 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-27 01:19:01.278882 | orchestrator | Friday 27 February 2026 01:14:54 +0000 (0:00:00.983) 0:05:03.002 ******* 2026-02-27 01:19:01.278886 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-27 01:19:01.278890 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-27 01:19:01.278893 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-27 01:19:01.278897 | orchestrator | 2026-02-27 01:19:01.278901 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-27 01:19:01.278905 | orchestrator | Friday 27 February 2026 01:14:55 +0000 (0:00:01.096) 0:05:04.098 ******* 2026-02-27 01:19:01.278908 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:19:01.278912 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:19:01.278916 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:19:01.278922 | orchestrator | 2026-02-27 01:19:01.278926 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-27 01:19:01.278930 | orchestrator | Friday 27 February 2026 01:14:56 +0000 (0:00:00.803) 0:05:04.902 ******* 2026-02-27 
01:19:01.278934 | orchestrator | ok: [testbed-node-3] 2026-02-27 01:19:01.278938 | orchestrator | ok: [testbed-node-4] 2026-02-27 01:19:01.278942 | orchestrator | ok: [testbed-node-5] 2026-02-27 01:19:01.278945 | orchestrator | 2026-02-27 01:19:01.278949 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-02-27 01:19:01.278953 | orchestrator | Friday 27 February 2026 01:14:57 +0000 (0:00:00.794) 0:05:05.696 ******* 2026-02-27 01:19:01.278957 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-27 01:19:01.278961 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-27 01:19:01.278964 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-27 01:19:01.278968 | orchestrator | 2026-02-27 01:19:01.278972 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-27 01:19:01.278988 | orchestrator | Friday 27 February 2026 01:14:58 +0000 (0:00:01.247) 0:05:06.943 ******* 2026-02-27 01:19:01.278993 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-27 01:19:01.278997 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-27 01:19:01.279000 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-27 01:19:01.279004 | orchestrator | 2026-02-27 01:19:01.279008 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-27 01:19:01.279012 | orchestrator | Friday 27 February 2026 01:14:59 +0000 (0:00:01.227) 0:05:08.171 ******* 2026-02-27 01:19:01.279015 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-27 01:19:01.279019 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-27 01:19:01.279023 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-27 01:19:01.279027 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-27 01:19:01.279031 
| orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-27 01:19:01.279035 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-27 01:19:01.279038 | orchestrator | 2026-02-27 01:19:01.279042 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-27 01:19:01.279046 | orchestrator | Friday 27 February 2026 01:15:03 +0000 (0:00:03.771) 0:05:11.943 ******* 2026-02-27 01:19:01.279050 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279054 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.279057 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.279061 | orchestrator | 2026-02-27 01:19:01.279065 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-27 01:19:01.279069 | orchestrator | Friday 27 February 2026 01:15:03 +0000 (0:00:00.457) 0:05:12.401 ******* 2026-02-27 01:19:01.279072 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279076 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.279080 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.279084 | orchestrator | 2026-02-27 01:19:01.279087 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-27 01:19:01.279091 | orchestrator | Friday 27 February 2026 01:15:04 +0000 (0:00:00.302) 0:05:12.703 ******* 2026-02-27 01:19:01.279095 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:19:01.279099 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:19:01.279102 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:19:01.279106 | orchestrator | 2026-02-27 01:19:01.279110 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-27 01:19:01.279114 | orchestrator | Friday 27 February 2026 01:15:05 +0000 (0:00:01.264) 0:05:13.967 ******* 2026-02-27 01:19:01.279118 | orchestrator | changed: 
[testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-27 01:19:01.279122 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-27 01:19:01.279150 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-27 01:19:01.279155 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-27 01:19:01.279159 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-27 01:19:01.279163 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-27 01:19:01.279166 | orchestrator | 2026-02-27 01:19:01.279170 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-27 01:19:01.279174 | orchestrator | Friday 27 February 2026 01:15:08 +0000 (0:00:03.335) 0:05:17.303 ******* 2026-02-27 01:19:01.279178 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-27 01:19:01.279182 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-27 01:19:01.279186 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-27 01:19:01.279189 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-27 01:19:01.279193 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:19:01.279197 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-27 01:19:01.279201 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:19:01.279205 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-27 01:19:01.279208 | orchestrator | 
changed: [testbed-node-5] 2026-02-27 01:19:01.279212 | orchestrator | 2026-02-27 01:19:01.279216 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-27 01:19:01.279220 | orchestrator | Friday 27 February 2026 01:15:12 +0000 (0:00:03.354) 0:05:20.657 ******* 2026-02-27 01:19:01.279223 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279227 | orchestrator | 2026-02-27 01:19:01.279231 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-27 01:19:01.279235 | orchestrator | Friday 27 February 2026 01:15:12 +0000 (0:00:00.162) 0:05:20.820 ******* 2026-02-27 01:19:01.279238 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279242 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.279246 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.279250 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.279253 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279257 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.279261 | orchestrator | 2026-02-27 01:19:01.279264 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-27 01:19:01.279268 | orchestrator | Friday 27 February 2026 01:15:12 +0000 (0:00:00.613) 0:05:21.434 ******* 2026-02-27 01:19:01.279272 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-27 01:19:01.279276 | orchestrator | 2026-02-27 01:19:01.279279 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-27 01:19:01.279296 | orchestrator | Friday 27 February 2026 01:15:13 +0000 (0:00:00.725) 0:05:22.159 ******* 2026-02-27 01:19:01.279301 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279305 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.279308 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.279312 | orchestrator 
| skipping: [testbed-node-0] 2026-02-27 01:19:01.279316 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279320 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.279324 | orchestrator | 2026-02-27 01:19:01.279328 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-02-27 01:19:01.279331 | orchestrator | Friday 27 February 2026 01:15:14 +0000 (0:00:00.821) 0:05:22.981 ******* 2026-02-27 01:19:01.279335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279361 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279376 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279382 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279423 | orchestrator | 2026-02-27 01:19:01.279427 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-27 01:19:01.279432 | orchestrator | Friday 27 February 2026 01:15:18 +0000 (0:00:04.037) 0:05:27.019 ******* 2026-02-27 01:19:01.279436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.279442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.279451 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.279455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.279461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.279465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.279469 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.279543 | orchestrator | 2026-02-27 01:19:01.279549 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-27 01:19:01.279555 | orchestrator | Friday 27 February 2026 01:15:25 +0000 (0:00:06.756) 0:05:33.775 ******* 2026-02-27 01:19:01.279562 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279569 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.279578 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.279583 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.279589 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279595 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.279601 | orchestrator | 2026-02-27 01:19:01.279607 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-27 01:19:01.279613 | orchestrator | Friday 27 February 2026 01:15:26 +0000 (0:00:01.369) 0:05:35.144 ******* 2026-02-27 01:19:01.279620 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-27 01:19:01.279625 | orchestrator | skipping: 
[testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-27 01:19:01.279631 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-27 01:19:01.279636 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-27 01:19:01.279647 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-27 01:19:01.279653 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-27 01:19:01.279660 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.279666 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-27 01:19:01.279672 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279678 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-27 01:19:01.279685 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.279691 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-27 01:19:01.279697 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-27 01:19:01.279704 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-27 01:19:01.279708 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-27 01:19:01.279712 | orchestrator | 2026-02-27 01:19:01.279717 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-27 01:19:01.279726 | orchestrator | Friday 27 February 2026 01:15:30 +0000 (0:00:03.983) 0:05:39.128 ******* 2026-02-27 01:19:01.279730 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279734 | orchestrator | 
skipping: [testbed-node-4] 2026-02-27 01:19:01.279737 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.279741 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.279745 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279749 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.279752 | orchestrator | 2026-02-27 01:19:01.279756 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-27 01:19:01.279760 | orchestrator | Friday 27 February 2026 01:15:31 +0000 (0:00:00.661) 0:05:39.790 ******* 2026-02-27 01:19:01.279764 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-27 01:19:01.279768 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-27 01:19:01.279771 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-27 01:19:01.279775 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-27 01:19:01.279779 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-27 01:19:01.279786 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-27 01:19:01.279790 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-27 01:19:01.279794 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-27 01:19:01.279798 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-27 
01:19:01.279802 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-27 01:19:01.279805 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279809 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-27 01:19:01.279813 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.279817 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-27 01:19:01.279821 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.279825 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-27 01:19:01.279828 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-27 01:19:01.279832 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-27 01:19:01.279836 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-27 01:19:01.279840 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-27 01:19:01.279843 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-27 01:19:01.279847 | orchestrator | 2026-02-27 01:19:01.279851 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-27 01:19:01.279855 | orchestrator | Friday 27 February 2026 01:15:36 +0000 (0:00:04.991) 0:05:44.782 ******* 2026-02-27 01:19:01.279859 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  
2026-02-27 01:19:01.279862 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-27 01:19:01.279872 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-27 01:19:01.279875 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-27 01:19:01.279879 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-27 01:19:01.279883 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-27 01:19:01.279887 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-27 01:19:01.279891 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-27 01:19:01.279894 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-27 01:19:01.279898 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-27 01:19:01.279902 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-27 01:19:01.279906 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-27 01:19:01.279909 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-27 01:19:01.279913 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-27 01:19:01.279917 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.279920 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-27 01:19:01.279924 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.279928 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-27 01:19:01.279932 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279936 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-27 01:19:01.279940 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-27 01:19:01.279944 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-27 01:19:01.279947 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-27 01:19:01.279951 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-27 01:19:01.279955 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-27 01:19:01.279959 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-27 01:19:01.279963 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-27 01:19:01.279967 | orchestrator | 2026-02-27 01:19:01.279973 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-27 01:19:01.279977 | orchestrator | Friday 27 February 2026 01:15:43 +0000 (0:00:07.710) 0:05:52.492 ******* 2026-02-27 01:19:01.279981 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.279985 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.279988 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.279992 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.279996 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.279999 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.280003 | orchestrator | 2026-02-27 01:19:01.280007 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-27 
01:19:01.280011 | orchestrator | Friday 27 February 2026 01:15:44 +0000 (0:00:00.846) 0:05:53.339 ******* 2026-02-27 01:19:01.280015 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.280019 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.280022 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.280026 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.280034 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.280037 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.280041 | orchestrator | 2026-02-27 01:19:01.280045 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-27 01:19:01.280049 | orchestrator | Friday 27 February 2026 01:15:45 +0000 (0:00:00.632) 0:05:53.972 ******* 2026-02-27 01:19:01.280053 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.280056 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.280060 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.280064 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:19:01.280068 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:19:01.280071 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:19:01.280075 | orchestrator | 2026-02-27 01:19:01.280079 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-27 01:19:01.280083 | orchestrator | Friday 27 February 2026 01:15:47 +0000 (0:00:02.174) 0:05:56.146 ******* 2026-02-27 01:19:01.280090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.280094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.280098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.280102 | orchestrator | skipping: [testbed-node-3] 2026-02-27 
01:19:01.280111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.280118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 01:19:01.280122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.280126 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.280161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-27 01:19:01.280165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-27 
01:19:01.280173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.280181 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.280185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.280189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.280193 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.280197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.280216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.280221 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.280225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-27 01:19:01.280229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-27 01:19:01.280238 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.280242 | orchestrator | 2026-02-27 01:19:01.280246 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-27 01:19:01.280250 | orchestrator | Friday 27 February 2026 01:15:49 +0000 (0:00:01.383) 0:05:57.529 ******* 2026-02-27 01:19:01.280254 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-27 01:19:01.280261 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-27 01:19:01.280265 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.280269 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-27 01:19:01.280272 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-27 01:19:01.280276 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.280280 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-27 01:19:01.280285 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-27 
01:19:01.280288 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.280292 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-27 01:19:01.280296 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-27 01:19:01.280300 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.280304 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-02-27 01:19:01.280307 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-27 01:19:01.280311 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.280315 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-27 01:19:01.280319 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-27 01:19:01.280323 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.280327 | orchestrator | 2026-02-27 01:19:01.280331 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-27 01:19:01.280334 | orchestrator | Friday 27 February 2026 01:15:49 +0000 (0:00:00.883) 0:05:58.412 ******* 2026-02-27 01:19:01.280338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280361 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:01.280426 | orchestrator | 2026-02-27 01:19:01.280431 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-27 01:19:01.280435 | orchestrator | Friday 27 February 2026 01:15:53 +0000 (0:00:03.234) 0:06:01.647 ******* 2026-02-27 01:19:01.280439 | orchestrator | skipping: [testbed-node-3] 2026-02-27 01:19:01.280442 | orchestrator | skipping: [testbed-node-4] 2026-02-27 01:19:01.280446 | orchestrator | skipping: [testbed-node-5] 2026-02-27 01:19:01.280450 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:01.280454 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:01.280458 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:01.280462 | orchestrator | 2026-02-27 01:19:01.280468 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-27 01:19:01.280474 | orchestrator | Friday 27 February 2026 01:15:53 +0000 (0:00:00.779) 0:06:02.426 ******* 2026-02-27 01:19:01.280481 | orchestrator | 2026-02-27 01:19:01.280487 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-27 01:19:01.280493 | orchestrator | Friday 27 February 2026 01:15:54 +0000 (0:00:00.138) 0:06:02.565 ******* 2026-02-27 01:19:01.280499 | orchestrator | 2026-02-27 01:19:01.280508 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-27 01:19:01.280515 | orchestrator | Friday 27 February 2026 01:15:54 +0000 (0:00:00.183) 0:06:02.749 ******* 2026-02-27 01:19:01.280520 | orchestrator | 2026-02-27 01:19:01.280527 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-27 01:19:01.280533 | orchestrator | Friday 27 February 2026 01:15:54 +0000 (0:00:00.152) 0:06:02.901 ******* 2026-02-27 
01:19:01.280540 | orchestrator | 2026-02-27 01:19:01.280545 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-27 01:19:01.280551 | orchestrator | Friday 27 February 2026 01:15:54 +0000 (0:00:00.142) 0:06:03.044 ******* 2026-02-27 01:19:01.280556 | orchestrator | 2026-02-27 01:19:01.280563 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-27 01:19:01.280569 | orchestrator | Friday 27 February 2026 01:15:54 +0000 (0:00:00.145) 0:06:03.189 ******* 2026-02-27 01:19:01.280575 | orchestrator | 2026-02-27 01:19:01.280581 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-27 01:19:01.280587 | orchestrator | Friday 27 February 2026 01:15:54 +0000 (0:00:00.306) 0:06:03.495 ******* 2026-02-27 01:19:01.280593 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:19:01.280598 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:01.280604 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:19:01.280610 | orchestrator | 2026-02-27 01:19:01.280616 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-27 01:19:01.280622 | orchestrator | Friday 27 February 2026 01:16:07 +0000 (0:00:12.337) 0:06:15.833 ******* 2026-02-27 01:19:01.280629 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:01.280635 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:19:01.280643 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:19:01.280647 | orchestrator | 2026-02-27 01:19:01.280651 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-27 01:19:01.280654 | orchestrator | Friday 27 February 2026 01:16:24 +0000 (0:00:17.487) 0:06:33.321 ******* 2026-02-27 01:19:01.280658 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:19:01.280662 | orchestrator | changed: [testbed-node-5] 2026-02-27 
01:19:01.280671 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:19:01.280674 | orchestrator | 2026-02-27 01:19:01.280678 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-27 01:19:01.280682 | orchestrator | Friday 27 February 2026 01:16:40 +0000 (0:00:16.085) 0:06:49.407 ******* 2026-02-27 01:19:01.280686 | orchestrator | changed: [testbed-node-3] 2026-02-27 01:19:01.280692 | orchestrator | changed: [testbed-node-5] 2026-02-27 01:19:01.280698 | orchestrator | changed: [testbed-node-4] 2026-02-27 01:19:01.280704 | orchestrator | 2026-02-27 01:19:01.280711 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-27 01:19:01.280717 | orchestrator | Friday 27 February 2026 01:17:09 +0000 (0:00:28.164) 0:07:17.571 ******* 2026-02-27 01:19:01.280722 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-02-27 01:19:01.280732 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-02-27 01:19:01.280738 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 
2026-02-27 01:19:01.280745 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:19:01.280751 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:19:01.280756 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:19:01.280762 | orchestrator |
2026-02-27 01:19:01.280768 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-02-27 01:19:01.280774 | orchestrator | Friday 27 February 2026 01:17:15 +0000 (0:00:06.230) 0:07:23.801 *******
2026-02-27 01:19:01.280780 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:19:01.280786 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:19:01.280793 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:19:01.280799 | orchestrator |
2026-02-27 01:19:01.280805 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-02-27 01:19:01.280812 | orchestrator | Friday 27 February 2026 01:17:16 +0000 (0:00:00.789) 0:07:24.591 *******
2026-02-27 01:19:01.280818 | orchestrator | changed: [testbed-node-4]
2026-02-27 01:19:01.280824 | orchestrator | changed: [testbed-node-3]
2026-02-27 01:19:01.280828 | orchestrator | changed: [testbed-node-5]
2026-02-27 01:19:01.280832 | orchestrator |
2026-02-27 01:19:01.280836 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-02-27 01:19:01.280840 | orchestrator | Friday 27 February 2026 01:17:42 +0000 (0:00:25.952) 0:07:50.543 *******
2026-02-27 01:19:01.280843 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:19:01.280847 | orchestrator |
2026-02-27 01:19:01.280851 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-02-27 01:19:01.280855 | orchestrator | Friday 27 February 2026 01:17:42 +0000 (0:00:00.144) 0:07:50.688 *******
2026-02-27 01:19:01.280859 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.280863 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:19:01.280866 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:19:01.280870 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.280874 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.280878 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-02-27 01:19:01.280882 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:19:01.280886 | orchestrator |
2026-02-27 01:19:01.280890 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-02-27 01:19:01.280894 | orchestrator | Friday 27 February 2026 01:18:05 +0000 (0:00:22.933) 0:08:13.622 *******
2026-02-27 01:19:01.280897 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.280901 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:19:01.280905 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:19:01.280908 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.280912 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.280916 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:19:01.280924 | orchestrator |
2026-02-27 01:19:01.280928 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-02-27 01:19:01.280936 | orchestrator | Friday 27 February 2026 01:18:16 +0000 (0:00:11.401) 0:08:25.024 *******
2026-02-27 01:19:01.280940 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.280944 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:19:01.280948 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.280952 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.280955 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:19:01.280959 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-02-27 01:19:01.280963 | orchestrator |
2026-02-27 01:19:01.280967 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-27 01:19:01.280971 | orchestrator | Friday 27 February 2026 01:18:21 +0000 (0:00:04.544) 0:08:29.568 *******
2026-02-27 01:19:01.280975 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:19:01.280979 | orchestrator |
2026-02-27 01:19:01.280982 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-27 01:19:01.280986 | orchestrator | Friday 27 February 2026 01:18:35 +0000 (0:00:14.500) 0:08:44.068 *******
2026-02-27 01:19:01.280990 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:19:01.280993 | orchestrator |
2026-02-27 01:19:01.280997 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-02-27 01:19:01.281001 | orchestrator | Friday 27 February 2026 01:18:36 +0000 (0:00:01.351) 0:08:45.420 *******
2026-02-27 01:19:01.281005 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:19:01.281009 | orchestrator |
2026-02-27 01:19:01.281016 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-02-27 01:19:01.281022 | orchestrator | Friday 27 February 2026 01:18:38 +0000 (0:00:01.475) 0:08:46.896 *******
2026-02-27 01:19:01.281027 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-27 01:19:01.281033 | orchestrator |
2026-02-27 01:19:01.281040 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-27 01:19:01.281046 | orchestrator | Friday 27 February 2026 01:18:51 +0000 (0:00:12.857) 0:08:59.753 *******
2026-02-27 01:19:01.281052 | orchestrator | ok: [testbed-node-3]
2026-02-27 01:19:01.281059 | orchestrator | ok: [testbed-node-4]
2026-02-27 01:19:01.281064 | orchestrator | ok: [testbed-node-5]
2026-02-27 01:19:01.281071 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:01.281077 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:19:01.281081 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:19:01.281084 | orchestrator |
2026-02-27 01:19:01.281088 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-27 01:19:01.281092 | orchestrator |
2026-02-27 01:19:01.281096 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-27 01:19:01.281099 | orchestrator | Friday 27 February 2026 01:18:53 +0000 (0:00:02.024) 0:09:01.778 *******
2026-02-27 01:19:01.281103 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:01.281107 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:01.281111 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:01.281115 | orchestrator |
2026-02-27 01:19:01.281118 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-27 01:19:01.281122 | orchestrator |
2026-02-27 01:19:01.281163 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-27 01:19:01.281168 | orchestrator | Friday 27 February 2026 01:18:54 +0000 (0:00:01.195) 0:09:02.974 *******
2026-02-27 01:19:01.281172 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.281176 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.281180 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.281184 | orchestrator |
2026-02-27 01:19:01.281187 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-27 01:19:01.281191 | orchestrator |
2026-02-27 01:19:01.281195 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-27 01:19:01.281203 | orchestrator | Friday 27 February 2026 01:18:55 +0000 (0:00:00.563) 0:09:03.537 *******
2026-02-27 01:19:01.281207 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-27 01:19:01.281211 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-27 01:19:01.281215 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-27 01:19:01.281218 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-27 01:19:01.281222 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-27 01:19:01.281226 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-27 01:19:01.281230 | orchestrator | skipping: [testbed-node-3]
2026-02-27 01:19:01.281234 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-27 01:19:01.281237 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-27 01:19:01.281241 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-27 01:19:01.281245 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-27 01:19:01.281249 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-27 01:19:01.281253 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-27 01:19:01.281257 | orchestrator | skipping: [testbed-node-4]
2026-02-27 01:19:01.281261 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-27 01:19:01.281265 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-27 01:19:01.281269 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-27 01:19:01.281273 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-27 01:19:01.281276 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-27 01:19:01.281280 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-27 01:19:01.281284 | orchestrator | skipping: [testbed-node-5]
2026-02-27 01:19:01.281288 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-27 01:19:01.281292 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-27 01:19:01.281299 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-27 01:19:01.281309 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-27 01:19:01.281316 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-27 01:19:01.281321 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-27 01:19:01.281327 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.281334 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-27 01:19:01.281340 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-27 01:19:01.281346 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-27 01:19:01.281352 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-27 01:19:01.281358 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-27 01:19:01.281365 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-27 01:19:01.281371 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.281377 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-27 01:19:01.281382 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-27 01:19:01.281388 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-27 01:19:01.281394 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-27 01:19:01.281398 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-27 01:19:01.281401 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-27 01:19:01.281405 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.281409 | orchestrator |
2026-02-27 01:19:01.281413 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-27 01:19:01.281421 | orchestrator |
2026-02-27 01:19:01.281425 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-27 01:19:01.281428 | orchestrator | Friday 27 February 2026 01:18:56 +0000 (0:00:01.465) 0:09:05.003 *******
2026-02-27 01:19:01.281432 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-27 01:19:01.281436 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-27 01:19:01.281440 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.281444 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-27 01:19:01.281447 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-27 01:19:01.281451 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.281455 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-27 01:19:01.281459 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-27 01:19:01.281462 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.281466 | orchestrator |
2026-02-27 01:19:01.281470 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-27 01:19:01.281474 | orchestrator |
2026-02-27 01:19:01.281478 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-27 01:19:01.281481 | orchestrator | Friday 27 February 2026 01:18:57 +0000 (0:00:00.701) 0:09:05.835 *******
2026-02-27 01:19:01.281488 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.281492 | orchestrator |
2026-02-27 01:19:01.281495 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-27 01:19:01.281499 | orchestrator |
2026-02-27 01:19:01.281503 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-27 01:19:01.281507 | orchestrator | Friday 27 February 2026 01:18:58 +0000 (0:00:00.701) 0:09:06.537 *******
2026-02-27 01:19:01.281510 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:01.281514 | orchestrator | skipping: [testbed-node-1]
2026-02-27 01:19:01.281518 | orchestrator | skipping: [testbed-node-2]
2026-02-27 01:19:01.281522 | orchestrator |
2026-02-27 01:19:01.281525 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 01:19:01.281529 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 01:19:01.281533 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-27 01:19:01.281538 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-27 01:19:01.281541 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-27 01:19:01.281545 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-27 01:19:01.281549 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-27 01:19:01.281553 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-27 01:19:01.281557 | orchestrator |
2026-02-27 01:19:01.281560 | orchestrator |
2026-02-27 01:19:01.281564 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 01:19:01.281568 | orchestrator | Friday 27 February 2026 01:18:58 +0000 (0:00:00.459) 0:09:06.997 *******
2026-02-27 01:19:01.281572 | orchestrator | ===============================================================================
2026-02-27 01:19:01.281578 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.52s
2026-02-27 01:19:01.281585 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 28.16s
2026-02-27 01:19:01.281601 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.95s
2026-02-27 01:19:01.281608 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.93s
2026-02-27 01:19:01.281613 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 22.59s
2026-02-27 01:19:01.281620 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.11s
2026-02-27 01:19:01.281626 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 21.24s
2026-02-27 01:19:01.281632 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 17.97s
2026-02-27 01:19:01.281638 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 17.68s
2026-02-27 01:19:01.281644 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.49s
2026-02-27 01:19:01.281651 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 16.09s
2026-02-27 01:19:01.281657 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 15.71s
2026-02-27 01:19:01.281662 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.62s
2026-02-27 01:19:01.281669 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.50s
2026-02-27 01:19:01.281675 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.86s
2026-02-27 01:19:01.281682 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.34s
2026-02-27 01:19:01.281688 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.40s
2026-02-27 01:19:01.281694 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.37s
2026-02-27 01:19:01.281700 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.52s
2026-02-27 01:19:01.281706 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.71s
2026-02-27 01:19:01.281711 | orchestrator | 2026-02-27 01:19:01 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:01.281718 | orchestrator | 2026-02-27 01:19:01 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:04.311506 | orchestrator | 2026-02-27 01:19:04 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:04.311584 | orchestrator | 2026-02-27 01:19:04 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:07.353784 | orchestrator | 2026-02-27 01:19:07 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:07.355433 | orchestrator | 2026-02-27 01:19:07 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:10.398463 | orchestrator | 2026-02-27 01:19:10 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:10.398682 | orchestrator | 2026-02-27 01:19:10 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:13.444368 | orchestrator | 2026-02-27 01:19:13 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:13.444493 | orchestrator | 2026-02-27 01:19:13 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:16.495218 | orchestrator | 2026-02-27 01:19:16 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:16.495309 | orchestrator | 2026-02-27 01:19:16 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:19.534735 | orchestrator | 2026-02-27 01:19:19 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:19.534861 | orchestrator | 2026-02-27 01:19:19 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:22.587405 | orchestrator | 2026-02-27 01:19:22 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:22.587543 | orchestrator | 2026-02-27 01:19:22 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:25.630312 | orchestrator | 2026-02-27 01:19:25 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:25.630412 | orchestrator | 2026-02-27 01:19:25 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:28.679113 | orchestrator | 2026-02-27 01:19:28 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:28.679245 | orchestrator | 2026-02-27 01:19:28 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:31.720283 | orchestrator | 2026-02-27 01:19:31 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state STARTED
2026-02-27 01:19:31.720390 | orchestrator | 2026-02-27 01:19:31 | INFO  | Wait 1 second(s) until the next check
2026-02-27 01:19:34.772211 | orchestrator |
2026-02-27 01:19:34.772315 | orchestrator | 2026-02-27 01:19:34 | INFO  | Task 12e29d92-b0d3-44c5-99a6-0709db4bbddd is in state SUCCESS
2026-02-27 01:19:34.772331 | orchestrator | 2026-02-27 01:19:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:34.773605 | orchestrator |
2026-02-27 01:19:34.773645 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-27 01:19:34.773657 | orchestrator |
2026-02-27 01:19:34.773668 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-27 01:19:34.773679 | orchestrator | Friday 27 February 2026 01:14:31 +0000 (0:00:00.301) 0:00:00.301 *******
2026-02-27 01:19:34.773690 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.773707 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:19:34.774231 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:19:34.774257 | orchestrator |
2026-02-27 01:19:34.774324 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-27 01:19:34.774350 | orchestrator | Friday 27 February 2026 01:14:31 +0000 (0:00:00.310) 0:00:00.611 *******
2026-02-27 01:19:34.774373 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-02-27 01:19:34.774395 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-02-27 01:19:34.774413 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-02-27 01:19:34.774432 | orchestrator |
2026-02-27 01:19:34.774680 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-02-27 01:19:34.774691 | orchestrator |
2026-02-27 01:19:34.774702 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-27 01:19:34.774713 | orchestrator | Friday 27 February 2026 01:14:32 +0000 (0:00:00.482) 0:00:01.094 *******
2026-02-27 01:19:34.774725 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:34.774736 | orchestrator |
2026-02-27 01:19:34.774747 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-02-27 01:19:34.774758 | orchestrator | Friday 27 February 2026 01:14:33 +0000 (0:00:00.644) 0:00:01.738 *******
2026-02-27 01:19:34.774769 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-02-27 01:19:34.774780 | orchestrator |
2026-02-27 01:19:34.774791 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-02-27 01:19:34.774809 | orchestrator | Friday 27 February 2026 01:14:37 +0000 (0:00:03.961) 0:00:05.699 *******
2026-02-27 01:19:34.774832 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-02-27 01:19:34.774859 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-02-27 01:19:34.774877 | orchestrator |
2026-02-27 01:19:34.774894 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-02-27 01:19:34.774911 | orchestrator | Friday 27 February 2026 01:14:43 +0000 (0:00:06.496) 0:00:12.195 *******
2026-02-27 01:19:34.774929 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-27 01:19:34.774978 | orchestrator |
2026-02-27 01:19:34.774997 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-02-27 01:19:34.775016 | orchestrator | Friday 27 February 2026 01:14:47 +0000 (0:00:03.857) 0:00:16.052 *******
2026-02-27 01:19:34.775033 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-27 01:19:34.775051 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-27 01:19:34.775088 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-02-27 01:19:34.775106 | orchestrator |
2026-02-27 01:19:34.775125 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-02-27 01:19:34.775139 | orchestrator | Friday 27 February 2026 01:14:55 +0000 (0:00:07.669) 0:00:23.721 *******
2026-02-27 01:19:34.775150 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-27 01:19:34.775203 | orchestrator |
2026-02-27 01:19:34.775217 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-02-27 01:19:34.775228 | orchestrator | Friday 27 February 2026 01:14:58 +0000 (0:00:03.870) 0:00:27.591 *******
2026-02-27 01:19:34.775238 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-27 01:19:34.775249 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-02-27 01:19:34.775259 | orchestrator |
2026-02-27 01:19:34.775270 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-02-27 01:19:34.775281 | orchestrator | Friday 27 February 2026 01:15:06 +0000 (0:00:07.627) 0:00:35.219 *******
2026-02-27 01:19:34.775292 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-02-27 01:19:34.775304 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-02-27 01:19:34.775316 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-02-27 01:19:34.775328 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-02-27 01:19:34.775340 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-02-27 01:19:34.775353 | orchestrator |
2026-02-27 01:19:34.775365 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-27 01:19:34.775377 | orchestrator | Friday 27 February 2026 01:15:23 +0000 (0:00:17.048) 0:00:52.267 *******
2026-02-27 01:19:34.775389 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:34.775401 | orchestrator |
2026-02-27 01:19:34.775413 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-02-27 01:19:34.775426 | orchestrator | Friday 27 February 2026 01:15:24 +0000 (0:00:00.625) 0:00:52.893 *******
2026-02-27 01:19:34.775438 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.775450 | orchestrator |
2026-02-27 01:19:34.775463 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-02-27 01:19:34.775475 | orchestrator | Friday 27 February 2026 01:15:29 +0000 (0:00:04.773) 0:00:58.691 *******
2026-02-27 01:19:34.775485 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.775496 | orchestrator |
2026-02-27 01:19:34.775507 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-27 01:19:34.775569 | orchestrator | Friday 27 February 2026 01:15:34 +0000 (0:00:04.773) 0:01:03.464 *******
2026-02-27 01:19:34.775582 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.775593 | orchestrator |
2026-02-27 01:19:34.775604 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-02-27 01:19:34.775614 | orchestrator | Friday 27 February 2026 01:15:38 +0000 (0:00:03.635) 0:01:07.100 *******
2026-02-27 01:19:34.775625 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-27 01:19:34.775636 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-27 01:19:34.775646 | orchestrator |
2026-02-27 01:19:34.775657 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-02-27 01:19:34.775667 | orchestrator | Friday 27 February 2026 01:15:50 +0000 (0:00:12.380) 0:01:19.480 *******
2026-02-27 01:19:34.775678 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-02-27 01:19:34.775702 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-02-27 01:19:34.775715 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-02-27 01:19:34.775727 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-02-27 01:19:34.775738 | orchestrator |
2026-02-27 01:19:34.775748 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-02-27 01:19:34.775759 | orchestrator | Friday 27 February 2026 01:16:07 +0000 (0:00:17.091) 0:01:36.572 *******
2026-02-27 01:19:34.775770 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.775780 | orchestrator |
2026-02-27 01:19:34.775791 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-02-27 01:19:34.775801 | orchestrator | Friday 27 February 2026 01:16:12 +0000 (0:00:05.004) 0:01:41.576 *******
2026-02-27 01:19:34.775812 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.775823 | orchestrator |
2026-02-27 01:19:34.775833 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-02-27 01:19:34.775844 | orchestrator | Friday 27 February 2026 01:16:18 +0000 (0:00:05.671) 0:01:47.248 *******
2026-02-27 01:19:34.775854 | orchestrator | skipping: [testbed-node-0]
2026-02-27 01:19:34.775865 | orchestrator |
2026-02-27 01:19:34.775876 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-02-27 01:19:34.775887 | orchestrator | Friday 27 February 2026 01:16:18 +0000 (0:00:00.231) 0:01:47.479 *******
2026-02-27 01:19:34.775897 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.775908 | orchestrator |
2026-02-27 01:19:34.775919 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-27 01:19:34.775929 | orchestrator | Friday 27 February 2026 01:16:22 +0000 (0:00:03.697) 0:01:51.176 *******
2026-02-27 01:19:34.775943 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:34.775961 | orchestrator |
2026-02-27 01:19:34.775987 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-02-27 01:19:34.776005 | orchestrator | Friday 27 February 2026 01:16:23 +0000 (0:00:01.046) 0:01:52.223 *******
2026-02-27 01:19:34.776023 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.776042 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.776061 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.776080 | orchestrator |
2026-02-27 01:19:34.776098 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-02-27 01:19:34.776116 | orchestrator | Friday 27 February 2026 01:16:29 +0000 (0:00:05.921) 0:01:58.144 *******
2026-02-27 01:19:34.776131 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.776142 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.776152 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.776223 | orchestrator |
2026-02-27 01:19:34.776236 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-02-27 01:19:34.776247 | orchestrator | Friday 27 February 2026 01:16:33 +0000 (0:00:04.522) 0:02:02.667 *******
2026-02-27 01:19:34.776258 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.776268 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.776279 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.776289 | orchestrator |
2026-02-27 01:19:34.776300 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-02-27 01:19:34.776311 | orchestrator | Friday 27 February 2026 01:16:34 +0000 (0:00:00.771) 0:02:03.438 *******
2026-02-27 01:19:34.776321 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.776332 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:19:34.776343 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:19:34.776363 | orchestrator |
2026-02-27 01:19:34.776374 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-02-27 01:19:34.776385 | orchestrator | Friday 27 February 2026 01:16:36 +0000 (0:00:01.766) 0:02:05.205 *******
2026-02-27 01:19:34.776396 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.776406 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.776416 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.776425 | orchestrator |
2026-02-27 01:19:34.776434 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-02-27 01:19:34.776444 | orchestrator | Friday 27 February 2026 01:16:37 +0000 (0:00:01.245) 0:02:06.450 *******
2026-02-27 01:19:34.776453 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.776462 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.776472 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.776481 | orchestrator |
2026-02-27 01:19:34.776498 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-02-27 01:19:34.776514 | orchestrator | Friday 27 February 2026 01:16:38 +0000 (0:00:01.151) 0:02:07.602 *******
2026-02-27 01:19:34.776530 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.776545 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.776561 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.776577 | orchestrator |
2026-02-27 01:19:34.776642 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-02-27 01:19:34.776657 | orchestrator | Friday 27 February 2026 01:16:41 +0000 (0:00:02.906) 0:02:10.508 *******
2026-02-27 01:19:34.776667 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.776676 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.776685 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.776695 | orchestrator |
2026-02-27 01:19:34.776704 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-02-27 01:19:34.776714 | orchestrator | Friday 27 February 2026 01:16:43 +0000 (0:00:01.744) 0:02:12.253 *******
2026-02-27 01:19:34.776723 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.776733 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:19:34.776742 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:19:34.776751 | orchestrator |
2026-02-27 01:19:34.776761 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-02-27 01:19:34.776771 | orchestrator | Friday 27 February 2026 01:16:44 +0000 (0:00:00.701) 0:02:12.955 *******
2026-02-27 01:19:34.776780 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:19:34.776789 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.776799 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:19:34.776808 | orchestrator |
2026-02-27 01:19:34.776817 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-02-27 01:19:34.776827 | orchestrator | Friday 27 February 2026 01:16:47 +0000 (0:00:03.234) 0:02:16.189 *******
2026-02-27 01:19:34.776837 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-27 01:19:34.776846 | orchestrator |
2026-02-27 01:19:34.776856 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-02-27 01:19:34.776865 | orchestrator | Friday 27 February 2026 01:16:48 +0000 (0:00:00.784) 0:02:16.974 *******
2026-02-27 01:19:34.776875 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.776884 | orchestrator |
2026-02-27 01:19:34.776894 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-02-27 01:19:34.776903 | orchestrator | Friday 27 February 2026 01:16:52 +0000 (0:00:04.099) 0:02:21.074 *******
2026-02-27 01:19:34.776913 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.776922 | orchestrator |
2026-02-27 01:19:34.776931 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2026-02-27 01:19:34.776941 | orchestrator | Friday 27 February 2026 01:16:55 +0000 (0:00:03.024) 0:02:24.098 *******
2026-02-27 01:19:34.776950 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-02-27 01:19:34.776960 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-02-27 01:19:34.776980 | orchestrator |
2026-02-27 01:19:34.776990 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2026-02-27 01:19:34.776999 | orchestrator | Friday 27 February 2026 01:17:02 +0000 (0:00:07.260) 0:02:31.358 *******
2026-02-27 01:19:34.777009 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.777018 | orchestrator |
2026-02-27 01:19:34.777028 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2026-02-27 01:19:34.777038 | orchestrator | Friday 27 February 2026 01:17:06 +0000 (0:00:03.937) 0:02:35.296 *******
2026-02-27 01:19:34.777047 | orchestrator | ok: [testbed-node-0]
2026-02-27 01:19:34.777056 | orchestrator | ok: [testbed-node-1]
2026-02-27 01:19:34.777072 | orchestrator | ok: [testbed-node-2]
2026-02-27 01:19:34.777082 | orchestrator |
2026-02-27 01:19:34.777092 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2026-02-27 01:19:34.777101 | orchestrator | Friday 27 February 2026 01:17:06 +0000 (0:00:00.350) 0:02:35.647 *******
2026-02-27 01:19:34.777114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-27 01:19:34.777154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-27 01:19:34.777189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.777200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.777219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.777234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-27 01:19:34.777245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777398 | orchestrator | 2026-02-27 
01:19:34.777408 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-27 01:19:34.777418 | orchestrator | Friday 27 February 2026 01:17:09 +0000 (0:00:02.584) 0:02:38.231 ******* 2026-02-27 01:19:34.777428 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:34.777437 | orchestrator | 2026-02-27 01:19:34.777447 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-27 01:19:34.777457 | orchestrator | Friday 27 February 2026 01:17:09 +0000 (0:00:00.149) 0:02:38.380 ******* 2026-02-27 01:19:34.777466 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:34.777475 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:34.777485 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:34.777494 | orchestrator | 2026-02-27 01:19:34.777504 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-27 01:19:34.777520 | orchestrator | Friday 27 February 2026 01:17:10 +0000 (0:00:00.539) 0:02:38.919 ******* 2026-02-27 01:19:34.777531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.777541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.777556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.777566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 
01:19:34.777577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.777587 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:34.777620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.777637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.777648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.777667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.777678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.777688 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:34.777720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.777737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.777747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.777757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.777773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.777784 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:34.777793 | orchestrator | 2026-02-27 01:19:34.777803 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-27 01:19:34.777813 | orchestrator | Friday 27 February 2026 01:17:10 +0000 (0:00:00.771) 0:02:39.690 ******* 2026-02-27 01:19:34.777822 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-27 01:19:34.777832 | orchestrator | 2026-02-27 01:19:34.777841 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-27 01:19:34.777851 | orchestrator | Friday 27 February 2026 01:17:11 +0000 (0:00:00.632) 0:02:40.323 ******* 2026-02-27 01:19:34.777861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.777899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.777911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.777925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.777936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.777946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.777956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777977 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.777997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778012 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778122 | orchestrator | 2026-02-27 01:19:34.778131 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-27 01:19:34.778141 | orchestrator | Friday 27 February 2026 01:17:17 +0000 (0:00:05.639) 0:02:45.963 ******* 2026-02-27 01:19:34.778151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.778187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.778198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.778240 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:34.778250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.778260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.778274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778295 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.778311 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:34.778326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.778337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.778347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.778381 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:34.778397 | orchestrator | 2026-02-27 01:19:34.778407 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-27 01:19:34.778416 | orchestrator | Friday 27 February 2026 01:17:18 +0000 (0:00:01.697) 0:02:47.661 ******* 2026-02-27 01:19:34.778426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.778443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.778453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.778487 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:34.778497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.778515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.778533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.778563 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:34.778577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-27 01:19:34.778593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-27 01:19:34.778603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-27 01:19:34.778631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-27 01:19:34.778640 | orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:34.778650 | orchestrator | 2026-02-27 01:19:34.778660 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-27 01:19:34.778669 | orchestrator | Friday 27 February 2026 01:17:20 +0000 (0:00:01.840) 0:02:49.501 ******* 2026-02-27 01:19:34.778679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.778694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.778710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.778725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.778736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.778746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.778755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.778860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-27 01:19:34.778870 | orchestrator |
2026-02-27 01:19:34.778880 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-02-27 01:19:34.778889 | orchestrator | Friday 27 February 2026 01:17:26 +0000 (0:00:05.366) 0:02:54.868 *******
2026-02-27 01:19:34.778899 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-27 01:19:34.778909 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-27 01:19:34.778918 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-27 01:19:34.778928 | orchestrator |
2026-02-27 01:19:34.778937 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-02-27 01:19:34.778947 | orchestrator | Friday 27 February 2026 01:17:27 +0000 (0:00:01.808) 0:02:56.677 *******
2026-02-27 01:19:34.778962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy':
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.778973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.778988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.779003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.779014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.779024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.779038 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779078 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779108 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779149 | orchestrator | 2026-02-27 01:19:34.779210 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 
2026-02-27 01:19:34.779222 | orchestrator | Friday 27 February 2026 01:17:49 +0000 (0:00:21.334) 0:03:18.011 *******
2026-02-27 01:19:34.779232 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.779241 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.779251 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.779260 | orchestrator |
2026-02-27 01:19:34.779270 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ******************
2026-02-27 01:19:34.779280 | orchestrator | Friday 27 February 2026 01:17:50 +0000 (0:00:01.605) 0:03:19.617 *******
2026-02-27 01:19:34.779289 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779299 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779308 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779317 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779327 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779337 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779347 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779356 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779371 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779380 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779390 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779399 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779408 | orchestrator |
2026-02-27 01:19:34.779418 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************
2026-02-27 01:19:34.779428 | orchestrator | Friday 27 February 2026 01:17:56 +0000 (0:00:05.742) 0:03:25.359 *******
2026-02-27 01:19:34.779437 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779447 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779456 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779465 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779473 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779481 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779488 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779496 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779504 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779511 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779521 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779534 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779547 | orchestrator |
2026-02-27 01:19:34.779561 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] **********
2026-02-27 01:19:34.779573 | orchestrator | Friday 27 February 2026 01:18:02 +0000 (0:00:05.936) 0:03:31.296 *******
2026-02-27 01:19:34.779586 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779598 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779610 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem)
2026-02-27 01:19:34.779621 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779632 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779657 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem)
2026-02-27 01:19:34.779670 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779685 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779700 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem)
2026-02-27 01:19:34.779708 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779716 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779724 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem)
2026-02-27 01:19:34.779732 | orchestrator |
2026-02-27 01:19:34.779739 | orchestrator | TASK [octavia : Check octavia containers] **************************************
2026-02-27 01:19:34.779747 | orchestrator | Friday 27 February 2026 01:18:09 +0000 (0:00:06.682) 0:03:37.978 *******
2026-02-27 01:19:34.779756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.779770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.779778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-27 01:19:34.779787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.779808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.779823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-27 01:19:34.779837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-27 01:19:34.779986 | orchestrator | 2026-02-27 01:19:34.779998 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-27 01:19:34.780010 | orchestrator | Friday 27 February 2026 01:18:14 +0000 (0:00:04.829) 0:03:42.807 ******* 2026-02-27 01:19:34.780018 | 
orchestrator | skipping: [testbed-node-0] 2026-02-27 01:19:34.780026 | orchestrator | skipping: [testbed-node-1] 2026-02-27 01:19:34.780034 | orchestrator | skipping: [testbed-node-2] 2026-02-27 01:19:34.780042 | orchestrator | 2026-02-27 01:19:34.780049 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-27 01:19:34.780057 | orchestrator | Friday 27 February 2026 01:18:14 +0000 (0:00:00.674) 0:03:43.481 ******* 2026-02-27 01:19:34.780065 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:34.780073 | orchestrator | 2026-02-27 01:19:34.780081 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-27 01:19:34.780095 | orchestrator | Friday 27 February 2026 01:18:17 +0000 (0:00:02.406) 0:03:45.888 ******* 2026-02-27 01:19:34.780103 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:34.780111 | orchestrator | 2026-02-27 01:19:34.780118 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-27 01:19:34.780126 | orchestrator | Friday 27 February 2026 01:18:19 +0000 (0:00:02.718) 0:03:48.607 ******* 2026-02-27 01:19:34.780134 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:34.780142 | orchestrator | 2026-02-27 01:19:34.780150 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-27 01:19:34.780158 | orchestrator | Friday 27 February 2026 01:18:22 +0000 (0:00:02.666) 0:03:51.273 ******* 2026-02-27 01:19:34.780193 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:34.780201 | orchestrator | 2026-02-27 01:19:34.780208 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-02-27 01:19:34.780216 | orchestrator | Friday 27 February 2026 01:18:25 +0000 (0:00:02.995) 0:03:54.269 ******* 2026-02-27 01:19:34.780224 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:34.780232 | 
orchestrator | 2026-02-27 01:19:34.780239 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-27 01:19:34.780247 | orchestrator | Friday 27 February 2026 01:18:48 +0000 (0:00:22.665) 0:04:16.935 ******* 2026-02-27 01:19:34.780255 | orchestrator | 2026-02-27 01:19:34.780263 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-27 01:19:34.780271 | orchestrator | Friday 27 February 2026 01:18:48 +0000 (0:00:00.070) 0:04:17.005 ******* 2026-02-27 01:19:34.780278 | orchestrator | 2026-02-27 01:19:34.780286 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-27 01:19:34.780294 | orchestrator | Friday 27 February 2026 01:18:48 +0000 (0:00:00.074) 0:04:17.079 ******* 2026-02-27 01:19:34.780302 | orchestrator | 2026-02-27 01:19:34.780309 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-27 01:19:34.780322 | orchestrator | Friday 27 February 2026 01:18:48 +0000 (0:00:00.075) 0:04:17.155 ******* 2026-02-27 01:19:34.780330 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:34.780340 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:19:34.780353 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:19:34.780373 | orchestrator | 2026-02-27 01:19:34.780388 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-27 01:19:34.780400 | orchestrator | Friday 27 February 2026 01:18:59 +0000 (0:00:11.276) 0:04:28.431 ******* 2026-02-27 01:19:34.780413 | orchestrator | changed: [testbed-node-0] 2026-02-27 01:19:34.780425 | orchestrator | changed: [testbed-node-2] 2026-02-27 01:19:34.780437 | orchestrator | changed: [testbed-node-1] 2026-02-27 01:19:34.780451 | orchestrator | 2026-02-27 01:19:34.780463 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 
2026-02-27 01:19:34.780477 | orchestrator | Friday 27 February 2026 01:19:06 +0000 (0:00:07.124) 0:04:35.555 *******
2026-02-27 01:19:34.780491 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.780503 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.780517 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.780526 | orchestrator |
2026-02-27 01:19:34.780534 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-02-27 01:19:34.780542 | orchestrator | Friday 27 February 2026 01:19:15 +0000 (0:00:09.104) 0:04:44.660 *******
2026-02-27 01:19:34.780549 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.780557 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.780565 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.780573 | orchestrator |
2026-02-27 01:19:34.780580 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-02-27 01:19:34.780588 | orchestrator | Friday 27 February 2026 01:19:21 +0000 (0:00:05.648) 0:04:50.308 *******
2026-02-27 01:19:34.780596 | orchestrator | changed: [testbed-node-0]
2026-02-27 01:19:34.780604 | orchestrator | changed: [testbed-node-2]
2026-02-27 01:19:34.780619 | orchestrator | changed: [testbed-node-1]
2026-02-27 01:19:34.780626 | orchestrator |
2026-02-27 01:19:34.780634 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 01:19:34.780642 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-27 01:19:34.780651 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-27 01:19:34.780659 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-27 01:19:34.780667 | orchestrator |
2026-02-27 01:19:34.780674 | orchestrator |
2026-02-27 01:19:34.780682 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 01:19:34.780690 | orchestrator | Friday 27 February 2026 01:19:32 +0000 (0:00:10.794) 0:05:01.103 *******
2026-02-27 01:19:34.780698 | orchestrator | ===============================================================================
2026-02-27 01:19:34.780706 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 22.67s
2026-02-27 01:19:34.780718 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 21.33s
2026-02-27 01:19:34.780726 | orchestrator | octavia : Add rules for security groups -------------------------------- 17.09s
2026-02-27 01:19:34.780734 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.05s
2026-02-27 01:19:34.780742 | orchestrator | octavia : Create security groups for octavia --------------------------- 12.38s
2026-02-27 01:19:34.780750 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.28s
2026-02-27 01:19:34.780757 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.79s
2026-02-27 01:19:34.780765 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.10s
2026-02-27 01:19:34.780773 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.67s
2026-02-27 01:19:34.780780 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.63s
2026-02-27 01:19:34.780788 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.26s
2026-02-27 01:19:34.780796 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 7.12s
2026-02-27 01:19:34.780803 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 6.68s
2026-02-27 01:19:34.780811 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.50s
2026-02-27 01:19:34.780819 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.94s
2026-02-27 01:19:34.780826 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.92s
2026-02-27 01:19:34.780834 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.80s
2026-02-27 01:19:34.780842 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.74s
2026-02-27 01:19:34.780849 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.67s
2026-02-27 01:19:34.780857 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.65s
2026-02-27 01:19:37.810576 | orchestrator | 2026-02-27 01:19:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:40.855974 | orchestrator | 2026-02-27 01:19:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:43.896060 | orchestrator | 2026-02-27 01:19:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:46.932384 | orchestrator | 2026-02-27 01:19:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:49.981799 | orchestrator | 2026-02-27 01:19:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:53.026753 | orchestrator | 2026-02-27 01:19:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:56.070606 | orchestrator | 2026-02-27 01:19:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:19:59.113859 | orchestrator | 2026-02-27 01:19:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:02.161557 | orchestrator | 2026-02-27 01:20:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:05.207662 | orchestrator | 2026-02-27 01:20:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:08.260319 | orchestrator | 2026-02-27 01:20:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:11.308090 | orchestrator | 2026-02-27 01:20:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:14.354980 | orchestrator | 2026-02-27 01:20:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:17.398960 | orchestrator | 2026-02-27 01:20:17 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:20.441801 | orchestrator | 2026-02-27 01:20:20 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:23.486393 | orchestrator | 2026-02-27 01:20:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:26.529032 | orchestrator | 2026-02-27 01:20:26 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:29.571622 | orchestrator | 2026-02-27 01:20:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:32.611713 | orchestrator | 2026-02-27 01:20:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-02-27 01:20:35.652859 | orchestrator |
2026-02-27 01:20:36.010696 | orchestrator |
2026-02-27 01:20:36.016697 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Feb 27 01:20:36 UTC 2026
2026-02-27 01:20:36.016783 | orchestrator |
2026-02-27 01:20:36.443670 | orchestrator | ok: Runtime: 0:37:35.681045
2026-02-27 01:20:37.024667 |
2026-02-27 01:20:37.024856 | TASK [Bootstrap services]
2026-02-27 01:20:37.835448 | orchestrator |
2026-02-27 01:20:37.835639 | orchestrator | # BOOTSTRAP
2026-02-27 01:20:37.835664 | orchestrator |
2026-02-27 01:20:37.835680 | orchestrator | + set -e
2026-02-27 01:20:37.835696 | orchestrator | + echo
2026-02-27 01:20:37.835711 | orchestrator | + echo '# BOOTSTRAP'
2026-02-27 01:20:37.835730 | orchestrator | + echo
2026-02-27 01:20:37.835774 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-02-27 01:20:37.845333 | orchestrator | + set -e
2026-02-27 01:20:37.845427 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-02-27 01:20:43.325370 | orchestrator | 2026-02-27 01:20:43 | INFO  | It takes a moment until task e8b828b6-6147-4954-b1ed-cf3dfa48bc8f (flavor-manager) has been started and output is visible here.
2026-02-27 01:20:51.464892 | orchestrator | 2026-02-27 01:20:46 | INFO  | Flavor SCS-1L-1 created
2026-02-27 01:20:51.465093 | orchestrator | 2026-02-27 01:20:46 | INFO  | Flavor SCS-1L-1-5 created
2026-02-27 01:20:51.465141 | orchestrator | 2026-02-27 01:20:47 | INFO  | Flavor SCS-1V-2 created
2026-02-27 01:20:51.465164 | orchestrator | 2026-02-27 01:20:47 | INFO  | Flavor SCS-1V-2-5 created
2026-02-27 01:20:51.465184 | orchestrator | 2026-02-27 01:20:47 | INFO  | Flavor SCS-1V-4 created
2026-02-27 01:20:51.465205 | orchestrator | 2026-02-27 01:20:47 | INFO  | Flavor SCS-1V-4-10 created
2026-02-27 01:20:51.465225 | orchestrator | 2026-02-27 01:20:47 | INFO  | Flavor SCS-1V-8 created
2026-02-27 01:20:51.465304 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-1V-8-20 created
2026-02-27 01:20:51.465342 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-2V-4 created
2026-02-27 01:20:51.465362 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-2V-4-10 created
2026-02-27 01:20:51.465381 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-2V-8 created
2026-02-27 01:20:51.465400 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-2V-8-20 created
2026-02-27 01:20:51.465419 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-2V-16 created
2026-02-27 01:20:51.465437 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-2V-16-50 created
2026-02-27 01:20:51.465456 | orchestrator | 2026-02-27 01:20:48 | INFO  | Flavor SCS-4V-8 created
2026-02-27 01:20:51.465474 | orchestrator | 2026-02-27 01:20:49 | INFO  | Flavor SCS-4V-8-20 created
2026-02-27 01:20:51.465492 | orchestrator | 2026-02-27 01:20:49 | INFO  | Flavor SCS-4V-16 created
2026-02-27 01:20:51.465511 | orchestrator | 2026-02-27 01:20:49 | INFO  | Flavor SCS-4V-16-50 created
2026-02-27 01:20:51.465530 | orchestrator | 2026-02-27 01:20:49 | INFO  | Flavor SCS-4V-32 created
2026-02-27 01:20:51.465548 | orchestrator | 2026-02-27 01:20:49 | INFO  | Flavor SCS-4V-32-100 created
2026-02-27 01:20:51.465567 | orchestrator | 2026-02-27 01:20:50 | INFO  | Flavor SCS-8V-16 created
2026-02-27 01:20:51.465585 | orchestrator | 2026-02-27 01:20:50 | INFO  | Flavor SCS-8V-16-50 created
2026-02-27 01:20:51.465604 | orchestrator | 2026-02-27 01:20:50 | INFO  | Flavor SCS-8V-32 created
2026-02-27 01:20:51.465622 | orchestrator | 2026-02-27 01:20:50 | INFO  | Flavor SCS-8V-32-100 created
2026-02-27 01:20:51.465641 | orchestrator | 2026-02-27 01:20:50 | INFO  | Flavor SCS-16V-32 created
2026-02-27 01:20:51.465661 | orchestrator | 2026-02-27 01:20:50 | INFO  | Flavor SCS-16V-32-100 created
2026-02-27 01:20:51.465679 | orchestrator | 2026-02-27 01:20:50 | INFO  | Flavor SCS-2V-4-20s created
2026-02-27 01:20:51.465698 | orchestrator | 2026-02-27 01:20:51 | INFO  | Flavor SCS-4V-8-50s created
2026-02-27 01:20:51.465716 | orchestrator | 2026-02-27 01:20:51 | INFO  | Flavor SCS-8V-32-100s created
2026-02-27 01:20:53.841012 | orchestrator | 2026-02-27 01:20:53 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-02-27 01:20:53.948355 | orchestrator | 2026-02-27 01:20:53 | INFO  | Task 4080f8b5-54d3-4d8e-b0aa-97f594316b96 (bootstrap-basic) was prepared for execution.
2026-02-27 01:20:53.948440 | orchestrator | 2026-02-27 01:20:53 | INFO  | It takes a moment until task 4080f8b5-54d3-4d8e-b0aa-97f594316b96 (bootstrap-basic) has been started and output is visible here.
2026-02-27 01:21:41.765269 | orchestrator |
2026-02-27 01:21:41.765364 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-02-27 01:21:41.765371 | orchestrator |
2026-02-27 01:21:41.765375 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-27 01:21:41.765380 | orchestrator | Friday 27 February 2026 01:20:58 +0000 (0:00:00.075) 0:00:00.075 *******
2026-02-27 01:21:41.765384 | orchestrator | ok: [localhost]
2026-02-27 01:21:41.765389 | orchestrator |
2026-02-27 01:21:41.765393 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-02-27 01:21:41.765397 | orchestrator | Friday 27 February 2026 01:21:00 +0000 (0:00:02.010) 0:00:02.086 *******
2026-02-27 01:21:41.765401 | orchestrator | ok: [localhost]
2026-02-27 01:21:41.765405 | orchestrator |
2026-02-27 01:21:41.765409 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-02-27 01:21:41.765413 | orchestrator | Friday 27 February 2026 01:21:10 +0000 (0:00:09.482) 0:00:11.569 *******
2026-02-27 01:21:41.765417 | orchestrator | changed: [localhost]
2026-02-27 01:21:41.765421 | orchestrator |
2026-02-27 01:21:41.765425 | orchestrator | TASK [Create public network] ***************************************************
2026-02-27 01:21:41.765430 | orchestrator | Friday 27 February 2026 01:21:17 +0000 (0:00:07.504) 0:00:19.073 *******
2026-02-27 01:21:41.765434 | orchestrator | changed: [localhost]
2026-02-27 01:21:41.765437 | orchestrator |
2026-02-27 01:21:41.765441 | orchestrator | TASK [Set public network to default] *******************************************
2026-02-27 01:21:41.765445 | orchestrator | Friday 27 February 2026 01:21:22 +0000 (0:00:05.049) 0:00:24.123 *******
2026-02-27 01:21:41.765451 | orchestrator | changed: [localhost]
2026-02-27 01:21:41.765455 | orchestrator |
2026-02-27 01:21:41.765459 | orchestrator | TASK [Create public subnet] ****************************************************
2026-02-27 01:21:41.765463 | orchestrator | Friday 27 February 2026 01:21:29 +0000 (0:00:06.612) 0:00:30.735 *******
2026-02-27 01:21:41.765467 | orchestrator | changed: [localhost]
2026-02-27 01:21:41.765471 | orchestrator |
2026-02-27 01:21:41.765474 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-02-27 01:21:41.765478 | orchestrator | Friday 27 February 2026 01:21:33 +0000 (0:00:04.504) 0:00:35.239 *******
2026-02-27 01:21:41.765482 | orchestrator | changed: [localhost]
2026-02-27 01:21:41.765485 | orchestrator |
2026-02-27 01:21:41.765489 | orchestrator | TASK [Create manager role] *****************************************************
2026-02-27 01:21:41.765506 | orchestrator | Friday 27 February 2026 01:21:37 +0000 (0:00:04.043) 0:00:39.283 *******
2026-02-27 01:21:41.765510 | orchestrator | ok: [localhost]
2026-02-27 01:21:41.765514 | orchestrator |
2026-02-27 01:21:41.765518 | orchestrator | PLAY RECAP *********************************************************************
2026-02-27 01:21:41.765522 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-27 01:21:41.765527 | orchestrator |
2026-02-27 01:21:41.765531 | orchestrator |
2026-02-27 01:21:41.765535 | orchestrator | TASKS RECAP ********************************************************************
2026-02-27 01:21:41.765538 | orchestrator | Friday 27 February 2026 01:21:41 +0000 (0:00:03.720) 0:00:43.004 *******
2026-02-27 01:21:41.765542 | orchestrator | ===============================================================================
2026-02-27 01:21:41.765546 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.48s
2026-02-27 01:21:41.765550 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.50s
2026-02-27 01:21:41.765553 | orchestrator | Set public network to default ------------------------------------------- 6.61s
2026-02-27 01:21:41.765557 | orchestrator | Create public network --------------------------------------------------- 5.05s
2026-02-27 01:21:41.765574 | orchestrator | Create public subnet ---------------------------------------------------- 4.50s
2026-02-27 01:21:41.765578 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.04s
2026-02-27 01:21:41.765582 | orchestrator | Create manager role ----------------------------------------------------- 3.72s
2026-02-27 01:21:41.765586 | orchestrator | Gathering Facts --------------------------------------------------------- 2.01s
2026-02-27 01:21:44.266067 | orchestrator | 2026-02-27 01:21:44 | INFO  | It takes a moment until task aa6fc771-c54e-48db-9f3e-b7a73612bf38 (image-manager) has been started and output is visible here.
2026-02-27 01:22:26.587001 | orchestrator | 2026-02-27 01:21:47 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-27 01:22:26.587077 | orchestrator | 2026-02-27 01:21:47 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-02-27 01:22:26.587086 | orchestrator | 2026-02-27 01:21:47 | INFO  | Importing image Cirros 0.6.2
2026-02-27 01:22:26.587091 | orchestrator | 2026-02-27 01:21:47 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-27 01:22:26.587096 | orchestrator | 2026-02-27 01:21:49 | INFO  | Waiting for image to leave queued state...
2026-02-27 01:22:26.587102 | orchestrator | 2026-02-27 01:21:51 | INFO  | Waiting for import to complete...
2026-02-27 01:22:26.587106 | orchestrator | 2026-02-27 01:22:01 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-27 01:22:26.587111 | orchestrator | 2026-02-27 01:22:02 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-27 01:22:26.587115 | orchestrator | 2026-02-27 01:22:02 | INFO  | Setting internal_version = 0.6.2
2026-02-27 01:22:26.587120 | orchestrator | 2026-02-27 01:22:02 | INFO  | Setting image_original_user = cirros
2026-02-27 01:22:26.587125 | orchestrator | 2026-02-27 01:22:02 | INFO  | Adding tag os:cirros
2026-02-27 01:22:26.587129 | orchestrator | 2026-02-27 01:22:02 | INFO  | Setting property architecture: x86_64
2026-02-27 01:22:26.587135 | orchestrator | 2026-02-27 01:22:02 | INFO  | Setting property hw_disk_bus: scsi
2026-02-27 01:22:26.587143 | orchestrator | 2026-02-27 01:22:02 | INFO  | Setting property hw_rng_model: virtio
2026-02-27 01:22:26.587152 | orchestrator | 2026-02-27 01:22:03 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-27 01:22:26.587162 | orchestrator | 2026-02-27 01:22:03 | INFO  | Setting property hw_watchdog_action: reset
2026-02-27 01:22:26.587170 | orchestrator | 2026-02-27 01:22:03 | INFO  | Setting property hypervisor_type: qemu
2026-02-27 01:22:26.587177 | orchestrator | 2026-02-27 01:22:03 | INFO  | Setting property os_distro: cirros
2026-02-27 01:22:26.587184 | orchestrator | 2026-02-27 01:22:04 | INFO  | Setting property os_purpose: minimal
2026-02-27 01:22:26.587192 | orchestrator | 2026-02-27 01:22:04 | INFO  | Setting property replace_frequency: never
2026-02-27 01:22:26.587199 | orchestrator | 2026-02-27 01:22:04 | INFO  | Setting property uuid_validity: none
2026-02-27 01:22:26.587207 | orchestrator | 2026-02-27 01:22:04 | INFO  | Setting property provided_until: none
2026-02-27 01:22:26.587215 | orchestrator | 2026-02-27 01:22:05 | INFO  | Setting property image_description: Cirros
2026-02-27 01:22:26.587223 | orchestrator | 2026-02-27 01:22:05 | INFO  | Setting property image_name: Cirros
2026-02-27 01:22:26.587229 | orchestrator | 2026-02-27 01:22:05 | INFO  | Setting property internal_version: 0.6.2
2026-02-27 01:22:26.587234 | orchestrator | 2026-02-27 01:22:05 | INFO  | Setting property image_original_user: cirros
2026-02-27 01:22:26.587253 | orchestrator | 2026-02-27 01:22:05 | INFO  | Setting property os_version: 0.6.2
2026-02-27 01:22:26.587264 | orchestrator | 2026-02-27 01:22:06 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-27 01:22:26.587269 | orchestrator | 2026-02-27 01:22:06 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-27 01:22:26.587273 | orchestrator | 2026-02-27 01:22:06 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-27 01:22:26.587278 | orchestrator | 2026-02-27 01:22:06 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-27 01:22:26.587282 | orchestrator | 2026-02-27 01:22:06 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-27 01:22:26.587286 | orchestrator | 2026-02-27 01:22:07 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-27 01:22:26.587293 | orchestrator | 2026-02-27 01:22:07 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-27 01:22:26.587297 | orchestrator | 2026-02-27 01:22:07 | INFO  | Importing image Cirros 0.6.3
2026-02-27 01:22:26.587320 | orchestrator | 2026-02-27 01:22:07 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-27 01:22:26.587324 | orchestrator | 2026-02-27 01:22:08 | INFO  | Waiting for image to leave queued state...
2026-02-27 01:22:26.587328 | orchestrator | 2026-02-27 01:22:10 | INFO  | Waiting for import to complete...
2026-02-27 01:22:26.587342 | orchestrator | 2026-02-27 01:22:20 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-27 01:22:26.587347 | orchestrator | 2026-02-27 01:22:21 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-27 01:22:26.587351 | orchestrator | 2026-02-27 01:22:21 | INFO  | Setting internal_version = 0.6.3
2026-02-27 01:22:26.587355 | orchestrator | 2026-02-27 01:22:21 | INFO  | Setting image_original_user = cirros
2026-02-27 01:22:26.587359 | orchestrator | 2026-02-27 01:22:21 | INFO  | Adding tag os:cirros
2026-02-27 01:22:26.587363 | orchestrator | 2026-02-27 01:22:21 | INFO  | Setting property architecture: x86_64
2026-02-27 01:22:26.587367 | orchestrator | 2026-02-27 01:22:21 | INFO  | Setting property hw_disk_bus: scsi
2026-02-27 01:22:26.587371 | orchestrator | 2026-02-27 01:22:22 | INFO  | Setting property hw_rng_model: virtio
2026-02-27 01:22:26.587375 | orchestrator | 2026-02-27 01:22:22 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-27 01:22:26.587379 | orchestrator | 2026-02-27 01:22:22 | INFO  | Setting property hw_watchdog_action: reset
2026-02-27 01:22:26.587383 | orchestrator | 2026-02-27 01:22:22 | INFO  | Setting property hypervisor_type: qemu
2026-02-27 01:22:26.587387 | orchestrator | 2026-02-27 01:22:22 | INFO  | Setting property os_distro: cirros
2026-02-27 01:22:26.587391 | orchestrator | 2026-02-27 01:22:23 | INFO  | Setting property os_purpose: minimal
2026-02-27 01:22:26.587395 | orchestrator | 2026-02-27 01:22:23 | INFO  | Setting property replace_frequency: never
2026-02-27 01:22:26.587400 | orchestrator | 2026-02-27 01:22:23 | INFO  | Setting property uuid_validity: none
2026-02-27 01:22:26.587404 | orchestrator | 2026-02-27 01:22:23 | INFO  | Setting property provided_until: none
2026-02-27 01:22:26.587408 | orchestrator | 2026-02-27 01:22:24 | INFO  | Setting property image_description: Cirros
2026-02-27 01:22:26.587412 | orchestrator | 2026-02-27 01:22:24 | INFO  | Setting property image_name: Cirros
2026-02-27 01:22:26.587416 | orchestrator | 2026-02-27 01:22:24 | INFO  | Setting property internal_version: 0.6.3
2026-02-27 01:22:26.587424 | orchestrator | 2026-02-27 01:22:24 | INFO  | Setting property image_original_user: cirros
2026-02-27 01:22:26.587428 | orchestrator | 2026-02-27 01:22:24 | INFO  | Setting property os_version: 0.6.3
2026-02-27 01:22:26.587432 | orchestrator | 2026-02-27 01:22:25 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-27 01:22:26.587436 | orchestrator | 2026-02-27 01:22:25 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-27 01:22:26.587440 | orchestrator | 2026-02-27 01:22:25 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-27 01:22:26.587444 | orchestrator | 2026-02-27 01:22:25 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-27 01:22:26.587448 | orchestrator | 2026-02-27 01:22:25 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-27 01:22:27.042866 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-27 01:22:29.591403 | orchestrator | 2026-02-27 01:22:29 | INFO  | date: 2026-02-26
2026-02-27 01:22:29.591494 | orchestrator | 2026-02-27 01:22:29 | INFO  | image: octavia-amphora-haproxy-2024.2.20260226.qcow2
2026-02-27 01:22:29.592258 | orchestrator | 2026-02-27 01:22:29 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260226.qcow2
2026-02-27 01:22:29.592288 | orchestrator | 2026-02-27 01:22:29 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260226.qcow2.CHECKSUM
2026-02-27 01:22:29.793116 | orchestrator | 2026-02-27 01:22:29 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/work/logs"
2026-02-27 01:23:07.520540 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/work/artifacts"
2026-02-27 01:23:07.797478 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/be8ea8ba42aa40fdb20d16250c2e0f7e/work/docs"
2026-02-27 01:23:07.817294 |
2026-02-27 01:23:07.817444 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-27 01:23:08.746433 | orchestrator | changed: .d..t...... ./
2026-02-27 01:23:08.746698 | orchestrator | changed: All items complete
2026-02-27 01:23:08.746752 |
2026-02-27 01:23:09.474245 | orchestrator | changed: .d..t...... ./
2026-02-27 01:23:10.179592 | orchestrator | changed: .d..t...... ./
2026-02-27 01:23:10.207439 |
2026-02-27 01:23:10.207632 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-27 01:23:10.250923 | orchestrator | skipping: Conditional result was False
2026-02-27 01:23:10.255868 | orchestrator | skipping: Conditional result was False
2026-02-27 01:23:10.272919 |
2026-02-27 01:23:10.273059 | PLAY RECAP
2026-02-27 01:23:10.273179 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-27 01:23:10.273212 |
2026-02-27 01:23:10.440658 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-27 01:23:10.441866 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-27 01:23:11.193751 |
2026-02-27 01:23:11.193922 | PLAY [Base post]
2026-02-27 01:23:11.208929 |
2026-02-27 01:23:11.209075 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-27 01:23:12.278386 | orchestrator | changed
2026-02-27 01:23:12.288436 |
2026-02-27 01:23:12.288573 | PLAY RECAP
2026-02-27 01:23:12.288653 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-27 01:23:12.288753 |
2026-02-27 01:23:12.437904 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-27 01:23:12.439172 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-27 01:23:13.265005 |
2026-02-27 01:23:13.265217 | PLAY [Base post-logs]
2026-02-27 01:23:13.276771 |
2026-02-27 01:23:13.276936 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-27 01:23:13.771140 | localhost | changed
2026-02-27 01:23:13.782289 |
2026-02-27 01:23:13.782463 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-27 01:23:13.820456 | localhost | ok
2026-02-27 01:23:13.824024 |
2026-02-27 01:23:13.824139 | TASK [Set zuul-log-path fact]
2026-02-27 01:23:13.842260 | localhost | ok
2026-02-27 01:23:13.851387 |
2026-02-27 01:23:13.851501 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-27 01:23:13.887801 | localhost | ok
2026-02-27 01:23:13.891277 |
2026-02-27 01:23:13.891410 | TASK [upload-logs : Create log directories]
2026-02-27 01:23:14.442827 | localhost | changed
2026-02-27 01:23:14.447764 |
2026-02-27 01:23:14.447941 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-27 01:23:14.969073 | localhost -> localhost | ok: Runtime: 0:00:00.006992
2026-02-27 01:23:14.974192 |
2026-02-27 01:23:14.974328 | TASK [upload-logs : Upload logs to log server]
2026-02-27 01:23:15.578067 | localhost | Output suppressed because no_log was given
2026-02-27 01:23:15.582613 |
2026-02-27 01:23:15.582826 | LOOP [upload-logs : Compress console log and json output]
2026-02-27 01:23:15.651236 | localhost | skipping: Conditional result was False
2026-02-27 01:23:15.657430 | localhost | skipping: Conditional result was False
2026-02-27 01:23:15.669454 |
2026-02-27 01:23:15.669703 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-27 01:23:15.730336 | localhost | skipping: Conditional result was False
2026-02-27 01:23:15.730973 |
2026-02-27 01:23:15.734683 | localhost | skipping: Conditional result was False
2026-02-27 01:23:15.743455 |
2026-02-27 01:23:15.743851 | LOOP [upload-logs : Upload console log and json output]